MHER: Model-based Hindsight Experience Replay
Yang Rui · Meng Fang · Lei Han · Yali Du · Feng Luo · Xiu Li
Event URL: https://openreview.net/forum?id=3zsx-jhn2LM

Solving multi-goal reinforcement learning (RL) problems with sparse rewards is generally challenging. Existing approaches have utilized goal relabeling on collected experiences to alleviate issues arising from sparse rewards. However, these methods are still limited in efficiency and cannot make full use of experiences. In this paper, we propose Model-based Hindsight Experience Replay (MHER), which exploits experiences more efficiently by leveraging environmental dynamics to generate virtual achieved goals. Replacing original goals with virtual goals generated from interaction with a trained dynamics model leads to a novel relabeling method, model-based relabeling (MBR). Based on MBR, MHER performs both reinforcement learning and supervised learning for efficient policy improvement. Theoretically, we also prove that the supervised part of MHER, i.e., goal-conditioned supervised learning with MBR data, optimizes a lower bound on the multi-goal RL objective. Experimental results in several point-based tasks and simulated robotics environments show that MHER achieves significantly higher sample efficiency than previous model-free and model-based multi-goal methods.
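To make the relabeling idea concrete, below is a minimal, hedged sketch of model-based relabeling in Python. It is not the authors' implementation: the function names (`model_based_relabel`, `achieved_goal`, `sparse_reward`), the rollout horizon, and the assumption that the achieved goal is the first two state coordinates (as in a point-reaching task) are illustrative choices; only the overall recipe, rolling the current policy out inside a learned dynamics model from the next state and relabeling with the imagined achieved goal, comes from the abstract.

```python
import numpy as np

def achieved_goal(state):
    # Illustrative assumption: in a point task the achieved goal is the
    # agent's (x, y) position, i.e. the first two state coordinates.
    return state[:2]

def sparse_reward(ag, goal, tol=0.05):
    # Standard sparse multi-goal reward: 0 on success, -1 otherwise.
    return 0.0 if np.linalg.norm(ag - goal) < tol else -1.0

def model_based_relabel(transitions, dynamics_model, policy, horizon=3):
    """Model-based relabeling (MBR) sketch: replace each stored goal with
    a virtual goal obtained by rolling the current policy out for a few
    steps inside the learned dynamics model."""
    relabeled = []
    for state, action, goal, next_state in transitions:
        s = next_state
        for _ in range(horizon):                    # imagined rollout
            s = dynamics_model(s, policy(s, goal))  # one-step model prediction
        virtual_goal = achieved_goal(s)             # goal reached in imagination
        r = sparse_reward(achieved_goal(next_state), virtual_goal)
        relabeled.append((state, action, virtual_goal, next_state, r))
    return relabeled

# Toy usage: a hand-coded linear "dynamics model" and a goal-seeking policy.
rng = np.random.default_rng(0)
dynamics_model = lambda s, a: s + 0.1 * np.concatenate([a, np.zeros(len(s) - len(a))])
policy = lambda s, g: np.clip(g - s[:2], -1.0, 1.0)
transitions = [(rng.normal(size=4), rng.normal(size=2),
                rng.normal(size=2), rng.normal(size=4)) for _ in range(5)]
batch = model_based_relabel(transitions, dynamics_model, policy)
print(batch[0])
```

Per the abstract, the relabeled data would then feed two objectives: an off-policy RL loss on the relabeled transitions and a goal-conditioned supervised learning loss that treats the stored action as a correct action for reaching the virtual goal.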

Author Information

Yang Rui (Tsinghua University)
Meng Fang (Tencent)
Lei Han (Tencent AI Lab)
Yali Du (University College London)

I am currently a research fellow at UCL. I am interested in multi-agent reinforcement learning, adversarial machine learning and recommendation systems.

Feng Luo (Tsinghua University)
Xiu Li