Implicit Offline Reinforcement Learning via Supervised Learning
Alexandre Piche · Rafael Pardinas · David Vazquez · Igor Mordatch · Chris Pal
Event URL: https://openreview.net/forum?id=UeE5nCxuLd4

Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset of varied behaviors. It retains the simplicity of supervised learning and Behavior Cloning (BC) while additionally exploiting return information. On BC tasks, implicit models have been shown to match or outperform explicit ones. Despite the benefits of using implicit models to learn robotic skills via BC, Offline RL via Supervised Learning algorithms have so far been limited to explicit models. We show how implicit models can leverage return information and match or outperform explicit algorithms at acquiring robotic skills from fixed datasets. Furthermore, we show how closely related our implicit methods are to other popular RL via Supervised Learning algorithms.
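To make the distinction concrete, below is a minimal sketch of the return-conditioned supervised learning setup the abstract builds on, contrasting an explicit model (which regresses the action directly) with an implicit, energy-based model (which scores candidate actions). This is an illustrative example only, not the authors' implementation; the network sizes, dataset fields, and the contrastive loss for the implicit head are assumptions made for the sketch.

    # Illustrative sketch (assumed details, not the paper's code).
    import torch
    import torch.nn as nn

    obs_dim, act_dim = 11, 3

    # Explicit model: maps (state, return-to-go) directly to an action.
    explicit_policy = nn.Sequential(
        nn.Linear(obs_dim + 1, 256), nn.ReLU(),
        nn.Linear(256, act_dim),
    )

    # Implicit model: scores (state, return-to-go, action) tuples; actions
    # are selected by maximizing the score rather than predicted directly.
    implicit_energy = nn.Sequential(
        nn.Linear(obs_dim + 1 + act_dim, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

    def explicit_loss(states, returns_to_go, actions):
        # Plain regression onto the dataset action, conditioned on return.
        pred = explicit_policy(torch.cat([states, returns_to_go], dim=-1))
        return ((pred - actions) ** 2).mean()

    def implicit_loss(states, returns_to_go, actions, num_negatives=16):
        # InfoNCE-style contrastive loss: the dataset action should score
        # higher than uniformly sampled counter-example actions.
        batch = states.shape[0]
        negatives = torch.rand(batch, num_negatives, act_dim) * 2 - 1
        candidates = torch.cat([actions.unsqueeze(1), negatives], dim=1)
        context = torch.cat([states, returns_to_go], dim=-1)
        context = context.unsqueeze(1).expand(-1, candidates.shape[1], -1)
        scores = implicit_energy(torch.cat([context, candidates], dim=-1)).squeeze(-1)
        labels = torch.zeros(batch, dtype=torch.long)  # index 0 is the dataset action
        return nn.functional.cross_entropy(scores, labels)

    # Toy batch standing in for a fixed (offline) dataset.
    s = torch.randn(32, obs_dim)
    g = torch.randn(32, 1)            # return-to-go: the "return information"
    a = torch.rand(32, act_dim) * 2 - 1
    print(explicit_loss(s, g, a).item(), implicit_loss(s, g, a).item())

At evaluation time, the explicit model outputs an action in one forward pass, while the implicit model would select the highest-scoring action among sampled candidates conditioned on a desired return.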

Author Information

Alexandre Piche (Mila)
Rafael Pardinas (ServiceNow Research)
David Vazquez (ServiceNow)
Igor Mordatch (Google)
Chris Pal (Montreal Institute for Learning Algorithms, École Polytechnique, Université de Montréal)
