Poster
RL Unplugged: A Collection of Benchmarks for Offline Reinforcement Learning
Caglar Gulcehre · Ziyu Wang · Alexander Novikov · Thomas Paine · Sergio Gómez · Konrad Zolna · Rishabh Agarwal · Josh Merel · Daniel Mankowitz · Cosmin Paduraru · Gabriel Dulac-Arnold · Jerry Li · Mohammad Norouzi · Matthew Hoffman · Nicolas Heess · Nando de Freitas

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #835

Offline methods for reinforcement learning have the potential to help bridge the gap between reinforcement learning research and real-world applications. They make it possible to learn policies from offline datasets, thus avoiding the cost, safety, and ethical concerns associated with online data collection in the real world. In this paper, we propose a benchmark called RL Unplugged to evaluate and compare offline RL methods. RL Unplugged includes data from a diverse range of domains, including games (e.g., the Atari benchmark) and simulated motor control problems (e.g., the DM Control Suite). The datasets include domains that are partially or fully observable, use continuous or discrete actions, and have stochastic vs. deterministic dynamics. We propose detailed evaluation protocols for each domain in RL Unplugged and provide an extensive analysis of supervised learning and offline RL methods using these protocols. We will release data for all our tasks and open-source all algorithms presented in this paper. We hope that our suite of benchmarks will increase the reproducibility of experiments and make it possible to study challenging tasks with a limited computational budget, thus making RL research both more systematic and more accessible across the community. Moving forward, we view RL Unplugged as a living benchmark suite that will evolve and grow with datasets contributed by the research community and ourselves. Our project page is available on GitHub.

Author Information

Caglar Gulcehre (DeepMind)
Ziyu Wang (DeepMind)
Alexander Novikov (DeepMind)
Thomas Paine (DeepMind)
Sergio Gómez (DeepMind)
Konrad Zolna (DeepMind)
Rishabh Agarwal (Google Research, Brain Team)

I am a research associate in the Google Brain team in Montréal. My research interests mainly revolve around Deep Reinforcement Learning (RL), often with the goal of making RL methods suitable for real-world problems.

Josh Merel (DeepMind)
Daniel Mankowitz (DeepMind)
Cosmin Paduraru (DeepMind)
Gabriel Dulac-Arnold (Google Research)
Jerry Li (DeepMind)

Industrial researcher specializing in generative models, style transfer, and RL.

Mohammad Norouzi (Google Brain)
Matthew Hoffman (DeepMind)
Nicolas Heess (Google DeepMind)
Nando de Freitas (DeepMind)
