Workshop | Uncertainty-Driven Pessimistic Q-Ensemble for Offline-to-Online Reinforcement Learning | Ingook Jang · Seonghyun Kim
Workshop | Sparse Q-Learning: Offline Reinforcement Learning with Implicit Value Regularization | Haoran Xu · Li Jiang · Li Jianxiong · Zhuoran Yang · Zhaoran Wang · Xianyuan Zhan
Poster | Thu 14:00 | LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation | Geon-Hyeong Kim · Jongmin Lee · Youngsoo Jang · Hongseok Yang · Kee-Eung Kim
Workshop | Offline Evaluation in RL: Soft Stability Weighting to Combine Fitted Q-Learning and Model-Based Methods | Briton Park · Xian Wu · Bin Yu · Angela Zhou
Workshop | Raisin: Residual Algorithms for Versatile Offline Reinforcement Learning | Braham Snyder · Yuke Zhu
Workshop | Benchmarking Offline Reinforcement Learning Algorithms for E-Commerce Order Fraud Evaluation | Soysal Degirmenci · Christopher S Jones
Workshop | Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling | Ashish Kumar · Ilya Kuzovkin
Workshop | CLaP: Conditional Latent Planners for Offline Reinforcement Learning | Harry Shin · Rose Wang
Poster | Tue 14:00 | On Gap-dependent Bounds for Offline Reinforcement Learning | Xinqi Wang · Qiwen Cui · Simon Du
Poster | Wed 9:00 | Towards Learning Universal Hyperparameter Optimizers with Transformers | Yutian Chen · Xingyou Song · Chansoo Lee · Zi Wang · Richard Zhang · David Dohan · Kazuya Kawakami · Greg Kochanski · Arnaud Doucet · Marc'Aurelio Ranzato · Sagi Perel · Nando de Freitas
Workshop | Offline Reinforcement Learning from Heteroskedastic Data via Support Constraints | Anikait Singh · Aviral Kumar · Quan Vuong · Yevgen Chebotar · Sergey Levine