Workshop: Offline Reinforcement Learning

Aviral Kumar, Rishabh Agarwal, George Tucker, Lihong Li, Doina Precup

Sat, Dec 12th @ 17:00 GMT – Sun, Dec 13th @ 02:00 GMT
Abstract: The common paradigm in reinforcement learning (RL) assumes that an agent frequently interacts with the environment and learns from its own collected experience. This mode of operation is prohibitive for many complex real-world problems, where repeatedly collecting diverse data is expensive (e.g., robotics or educational agents) and/or dangerous (e.g., healthcare). Offline RL, by contrast, focuses on training agents from logged data, with no further environment interaction. Offline RL promises a data-driven RL paradigm with the potential to scale end-to-end learning approaches to real-world decision-making tasks such as robotics, recommendation systems, dialogue generation, autonomous driving, healthcare systems, and other safety-critical applications. Recently, successful deep RL algorithms have been adapted to the offline setting and have shown promise in a number of domains; however, significant algorithmic and practical challenges remain. The goal of this workshop is to bring attention to offline RL from both within and outside the RL community; to discuss the algorithmic challenges that need to be addressed; to discuss potential real-world applications as well as limitations and open problems; and to arrive at concrete problem statements and evaluation protocols, inspired by real-world applications, for the research community to work on.

For details on submission please visit: https://offline-rl-neurips.github.io/ (Submission deadline: October 9, 11:59 pm PT)

Speakers:
Emma Brunskill (Stanford)
Finale Doshi-Velez (Harvard)
John Langford (Microsoft Research)
Nan Jiang (UIUC)
Brandyn White (Waymo Research)
Nando de Freitas (DeepMind)

Schedule

16:50 – 17:00 GMT
Introduction
Aviral Kumar, George Tucker, Rishabh Agarwal
17:00 – 17:30 GMT
Offline RL
Nando de Freitas
17:30 – 17:40 GMT
Q&A w/ Nando de Freitas
17:40 – 17:50 GMT
Contributed Talk 1: Offline Reinforcement Learning by Solving Derived Non-Parametric MDPs
Aayam Shrestha
17:50 – 18:00 GMT
Contributed Talk 2: Chaining Behaviors from Data with Model-Free Reinforcement Learning
Avi Singh
18:00 – 18:10 GMT
Contributed Talk 3: Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets
Seunghyun Lee, Younggyo Seo, Kimin Lee
18:10 – 18:20 GMT
Contributed Talk 4: Addressing Extrapolation Error in Deep Offline Reinforcement Learning
Caglar Gulcehre
18:20 – 18:30 GMT
Q&A for Contributed Talks 1–4
18:30 – 19:20 GMT
Poster Session 1 (gather.town)
19:20 – 19:50 GMT
Causal Structure Discovery in RL
John Langford
19:50 – 20:00 GMT
Q&A w/ John Langford
20:00 – 21:00 GMT
Panel
Emma Brunskill, Nan Jiang, Nando de Freitas, Finale Doshi-Velez, Sergey Levine, John Langford, Lihong Li, George Tucker, Rishabh Agarwal, Aviral Kumar
21:10 – 21:40 GMT
Learning a Multi-Agent Simulator from Offline Demonstrations
Brandyn White
21:40 – 21:50 GMT
Q&A w/ Brandyn White
21:50 – 22:20 GMT
Towards Reliable Validation and Evaluation for Offline RL
Nan Jiang
22:20 – 22:30 GMT
Q&A w/ Nan Jiang
22:30 – 22:40 GMT
Contributed Talk 5: Latent Action Space for Offline Reinforcement Learning
Wenxuan Zhou
22:40 – 22:50 GMT
Contributed Talk 6: What are the Statistical Limits for Batch RL with Linear Function Approximation?
Ruosong Wang
22:50 – 23:00 GMT
Contributed Talk 7: Distilled Thompson Sampling: Practical and Efficient Thompson Sampling via Imitation Learning
Hongseok Namkoong
23:00 – 23:10 GMT
Contributed Talk 8: Batch-Constrained Distributional Reinforcement Learning for Session-based Recommendation
Diksha Garg
Sat, Dec 12th @ 23:15 GMT – Sun, Dec 13th @ 00:30 GMT
Poster Session 2 (gather.town)
00:30 – 01:00 GMT
Counterfactuals and Offline RL
Emma Brunskill
01:00 – 01:10 GMT
Q&A w/ Emma Brunskill
01:10 – 01:40 GMT
Batch RL Models Built for Validation
Finale Doshi-Velez
01:40 – 01:50 GMT
Q&A w/ Finale Doshi-Velez
01:50 – 02:00 GMT
Closing Remarks