Type | Time | Title | Authors
Poster | Wed 14:00 | Mismatched No More: Joint Model-Policy Optimization for Model-Based RL | Benjamin Eysenbach · Alexander Khazatsky · Sergey Levine · Russ Salakhutdinov
Poster | Wed 9:00 | Conservative Dual Policy Optimization for Efficient Model-Based Reinforcement Learning | Shenao Zhang
Poster | Thu 9:00 | Towards Safe Reinforcement Learning with a Safety Editor Policy | Haonan Yu · Wei Xu · Haichao Zhang
Poster | Wed 14:00 | Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm | Ashish K Jayant · Shalabh Bhatnagar
Poster | Tue 9:00 | Learning Generalized Policy Automata for Relational Stochastic Shortest Path Problems | Rushang Karia · Rashmeet Kaur Nayyar · Siddharth Srivastava
Poster | Tue 9:00 | A Unified Framework for Alternating Offline Model Training and Policy Learning | Shentao Yang · Shujian Zhang · Yihao Feng · Mingyuan Zhou
Poster | Wed 14:00 | Online Reinforcement Learning for Mixed Policy Scopes | Junzhe Zhang · Elias Bareinboim
Workshop | | Meta-Learning General-Purpose Learning Algorithms with Transformers | Louis Kirsch · Luke Metz · James Harrison · Jascha Sohl-Dickstein
Workshop | | Domain Invariant Q-Learning for model-free robust continuous control under visual distractions | Tom Dupuis · Jaonary Rabarisoa · Quoc Cuong PHAM · David Filliat
Poster | Thu 9:00 | MoCoDA: Model-based Counterfactual Data Augmentation | Silviu Pitis · Elliot Creager · Ajay Mandlekar · Animesh Garg
Poster | | Model-Based Opponent Modeling | XiaoPeng Yu · Jiechuan Jiang · Wanpeng Zhang · Haobin Jiang · Zongqing Lu
Workshop | | Offline evaluation in RL: soft stability weighting to combine fitted Q-learning and model-based methods | Briton Park · Xian Wu · Bin Yu · Angela Zhou