Workshop
Offline Reinforcement Learning
Aviral Kumar · Rishabh Agarwal · George Tucker · Lihong Li · Doina Precup

Event URL: https://offline-rl-neurips.github.io/

The common paradigm in reinforcement learning (RL) assumes that an agent frequently interacts with the environment and learns from its own collected experience. This mode of operation is prohibitive for many complex real-world problems, where repeatedly collecting diverse data is expensive (e.g., robotics or educational agents) and/or dangerous (e.g., healthcare). Offline RL instead focuses on training agents from logged data, with no further environment interaction. Offline RL promises to bring forward a data-driven RL paradigm and carries the potential to scale up end-to-end learning approaches to real-world decision-making tasks such as robotics, recommendation systems, dialogue generation, autonomous driving, healthcare systems, and safety-critical applications. Recently, successful deep RL algorithms have been adapted to the offline RL setting and have demonstrated potential in a number of domains; however, significant algorithmic and practical challenges remain to be addressed. The goal of this workshop is to bring attention to offline RL from both within and outside the RL community, to discuss the algorithmic challenges that need to be addressed, potential real-world applications, and current limitations, and to arrive at concrete problem statements and evaluation protocols, inspired by real-world applications, for the research community to work on.
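To make the distinction concrete, the following is a minimal sketch of the offline setting: the agent updates a value function using only a fixed dataset of logged transitions and never queries the environment. The toy MDP, dataset, and hyperparameters are illustrative assumptions, not part of any workshop material.

```python
# Minimal illustration of offline RL: tabular Q-learning trained purely
# on a fixed, logged dataset of (state, action, reward, next_state, done)
# transitions. No environment interaction occurs during training.
# The toy chain MDP and dataset below are hypothetical.

N_STATES, N_ACTIONS = 4, 2
GAMMA, ALPHA = 0.9, 0.1

# Hypothetical logged transitions from some behavior policy on a 4-state
# chain where action 1 moves right and reaching state 3 yields reward 1.
dataset = [
    (0, 1, 0.0, 1, False),
    (1, 1, 0.0, 2, False),
    (2, 1, 1.0, 3, True),
    (0, 0, 0.0, 0, False),
    (1, 0, 0.0, 0, False),
]

def offline_q_learning(transitions, epochs=200):
    """Run Q-learning updates over a fixed dataset only."""
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(epochs):
        for s, a, r, s2, done in transitions:
            target = r if done else r + GAMMA * max(q[s2])
            q[s][a] += ALPHA * (target - q[s][a])
    return q

q = offline_q_learning(dataset)
```

A key caveat that motivates much of the research discussed at the workshop: because updates bootstrap from `max(q[s2])`, actions poorly covered by the logged data can acquire erroneously high value estimates (the distribution-shift problem), which is why practical offline RL algorithms add conservatism or behavior constraints on top of a loop like this one.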

For submission details, please visit: https://offline-rl-neurips.github.io/ (Submission deadline: October 9, 11:59 pm PT)

Speakers:
Emma Brunskill (Stanford)
Finale Doshi-Velez (Harvard)
John Langford (Microsoft Research)
Nan Jiang (UIUC)
Brandyn White (Waymo Research)
Nando de Freitas (DeepMind)

Author Information

Aviral Kumar (UC Berkeley)
Rishabh Agarwal (Google Research, Brain Team)

I am a research associate in the Google Brain team in Montréal. My research interests mainly revolve around Deep Reinforcement Learning (RL), often with the goal of making RL methods suitable for real-world problems.

George Tucker (Google Brain)
Lihong Li (Google Brain)
Doina Precup (McGill University / Mila / DeepMind Montreal)
