Poster
Hindsight Experience Replay
Marcin Andrychowicz · Filip Wolski · Alex Ray · Jonas Schneider · Rachel Fong · Peter Welinder · Bob McGrew · Josh Tobin · Pieter Abbeel · Wojciech Zaremba

Tue Dec 5th 06:30 -- 10:30 PM @ Pacific Ballroom #199

Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient that makes training possible in these challenging environments. We show that policies trained in a physics simulation can be deployed on a physical robot and successfully complete the task. A video presenting our experiments is available at https://goo.gl/SMrQnI.
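The core idea is to replay each transition not only with the goal the agent was originally pursuing, but also with goals it actually achieved later in the same episode, so that even failed episodes produce informative reward signal for a goal-conditioned, off-policy learner. Below is a minimal Python sketch of this goal-relabeling step under the paper's "future" strategy; the transition layout, field names, and reward_fn signature are illustrative assumptions, not the authors' implementation.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with hindsight transitions ("future" strategy).

    episode: list of dicts with keys 'state', 'action', 'next_state',
             'achieved_goal' (the goal actually reached at next_state).
    reward_fn: maps (achieved_goal, goal) -> binary reward
               (e.g. 0 on success, -1 otherwise, as in the sparse setting).
    k: number of hindsight goals sampled per transition.
    """
    relabeled = []
    for t, tr in enumerate(episode):
        # Candidate goals: states achieved at or after this timestep.
        future = episode[t:]
        for _ in range(k):
            new_goal = random.choice(future)['achieved_goal']
            relabeled.append({
                'state': tr['state'],
                'action': tr['action'],
                'next_state': tr['next_state'],
                'goal': new_goal,
                # Recompute the sparse reward w.r.t. the substituted goal;
                # by construction many of these transitions are successes.
                'reward': reward_fn(tr['achieved_goal'], new_goal),
            })
    return relabeled
```

In practice the relabeled transitions are stored in the replay buffer alongside the original ones and consumed by a standard off-policy algorithm such as DDPG.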

Author Information

Marcin Andrychowicz (OpenAI)
Filip Wolski (OpenAI)
Alex Ray (OpenAI)
Jonas Schneider (OpenAI)
Rachel Fong (OpenAI)
Peter Welinder (OpenAI)
Bob McGrew (OpenAI)
Josh Tobin (OpenAI)
Pieter Abbeel (OpenAI, UC Berkeley)
Wojciech Zaremba (OpenAI)