
Imitating Past Successes can be Very Suboptimal
Benjamin Eysenbach · Soumith Udatha · Russ Salakhutdinov · Sergey Levine

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #303

Prior work has proposed a simple strategy for reinforcement learning (RL): label experience with the outcomes achieved in that experience, and then imitate the relabeled experience. These outcome-conditioned imitation learning methods are appealing because of their simplicity, strong performance, and close ties with supervised learning. However, it remains unclear how these methods relate to the standard RL objective, reward maximization. In this paper, we prove that existing outcome-conditioned imitation learning methods do not necessarily improve the policy. However, we show that a simple modification results in a method that does guarantee policy improvement. Our aim is not to develop an entirely new method, but rather to explain how a variant of outcome-conditioned imitation learning can be used to maximize rewards.
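The relabeling strategy the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy trajectories, the function name, and the choice of the final state as the "outcome" label are all assumptions for the example.

```python
def relabel_with_outcomes(trajectories):
    """Relabel each (state, action) pair with the outcome its own trajectory achieved.

    Each trajectory is a list of (state, action) tuples. Following the hindsight
    relabeling idea, the outcome is taken to be the final state actually reached,
    so every transition is labeled with an outcome it demonstrably led to.
    """
    dataset = []
    for traj in trajectories:
        outcome = traj[-1][0]  # the final state reached serves as the outcome label
        for state, action in traj:
            dataset.append((state, outcome, action))
    return dataset


# Two toy 1-D trajectories: one drifts right to state 2, one drifts left to -2.
trajs = [
    [(0, +1), (1, +1), (2, +1)],
    [(0, -1), (-1, -1), (-2, -1)],
]

# After relabeling, a supervised learner would be trained to predict the action
# given (state, outcome) -- this is the "imitate the relabeled experience" step.
data = relabel_with_outcomes(trajs)
```

Here every transition ends up labeled with whatever outcome its trajectory happened to reach, which is why the relabeled data always looks "successful" under supervised learning; the paper's result concerns when imitating such data does, or does not, improve the policy with respect to rewards.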

Author Information

Benjamin Eysenbach (CMU)

I'm a 5th-year PhD student at CMU, focusing on RL algorithms. I am currently on the faculty job market.

Soumith Udatha (Carnegie Mellon University)
Russ Salakhutdinov (Carnegie Mellon University)
Sergey Levine (UC Berkeley)
