Search All 2023 Events

896 Results (Page 2 of 75)
Poster
Tue 8:45 Constructing Non-isotropic Gaussian Diffusion Model Using Isotropic Gaussian Diffusion Model for Image Editing
Xi Yu · Xiang Gu · Haozhi Liu · Jian Sun
Poster
Tue 15:15 Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression
Allan Raventós · Mansheej Paul · Feng Chen · Surya Ganguli
Workshop
Sat 14:45 Model-adapted Fourier sampling for generative compressed sensing
Aaron Berk · Simone Brugiapaglia · Yaniv Plan · Matthew Scott · Xia Sheng · Ozgur Yilmaz
Poster
Tue 15:15 Change point detection and inference in multivariate non-parametric models under mixing conditions
Carlos Misael Madrid Padilla · Haotian Xu · Daren Wang · Oscar Hernan Madrid Padilla · Yi Yu
Workshop
Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning
Zhaoyi Zhou · Chuning Zhu · Runlong Zhou · Qiwen Cui · Abhishek Gupta · Simon Du
Workshop
Double Policy Estimation for Importance Sampling in Sequence Modeling-Based Reinforcement Learning
Hanhan Zhou · Tian Lan · Vaneet Aggarwal
Poster
Wed 15:00 MoVie: Visual Model-Based Policy Adaptation for View Generalization
Sizhe Yang · Yanjie Ze · Huazhe Xu
Workshop
Non-Uniform Sampling and Adaptive Optimizers in Deep Learning
Thibault Lahire
Workshop
Fri 9:40 Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning
Workshop
Particle Guidance: non-I.I.D. Diverse Sampling with Diffusion Models
Gabriele Corso · Yilun Xu · Valentin De Bortoli · Regina Barzilay · Tommi Jaakkola