Spotlight
Policy Continuation with Hindsight Inverse Dynamics
Hao Sun · Zhizhong Li · Xiaotong Liu · Bolei Zhou · Dahua Lin

Wed Dec 11th 10:35 -- 10:40 AM @ West Exhibition Hall B

Solving goal-oriented tasks is an important but challenging problem in reinforcement learning (RL). For such tasks, the rewards are often sparse, making it difficult to learn a policy effectively. To tackle this difficulty, we propose a new approach called Policy Continuation with Hindsight Inverse Dynamics (PCHID). This approach learns from Hindsight Inverse Dynamics, which builds on Hindsight Experience Replay, enabling the policy to learn in a self-imitated manner and thus be trained with supervised learning. The work further extends this idea to multi-step settings with Policy Continuation. The proposed method is general: it can work in isolation or be combined with other on-policy and off-policy algorithms. On two multi-goal tasks, GridWorld and FetchReach, PCHID significantly improves sample efficiency as well as final performance.
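To make the core idea concrete, the following is a minimal sketch of a one-step Hindsight Inverse Dynamics update in a discrete-action setting: the goal actually achieved at the next state is relabeled as the commanded goal, and the taken action becomes the supervised label. The network, the `achieved_goal` mapping, and all hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of one-step Hindsight Inverse Dynamics (HID) training.
# Assumes a small goal-conditioned policy and a hypothetical state-to-goal map.
import torch
import torch.nn as nn

state_dim, goal_dim, n_actions = 4, 2, 4

# Policy conditioned on (state, goal); trained with supervised learning.
policy = nn.Sequential(
    nn.Linear(state_dim + goal_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def achieved_goal(state):
    # Hypothetical mapping from a state to the goal it achieves
    # (e.g. the agent's (x, y) position in GridWorld).
    return state[..., :goal_dim]

def hid_update(trajectory):
    """One supervised update from a list of (state, action, next_state).

    In hindsight, the goal achieved at s_{t+1} is treated as if it had been
    the commanded goal, and the executed action a_t is the imitation target.
    """
    states = torch.stack([s for s, _, _ in trajectory])
    actions = torch.tensor([a for _, a, _ in trajectory])
    next_states = torch.stack([s2 for _, _, s2 in trajectory])

    hindsight_goals = achieved_goal(next_states)          # relabel goals
    logits = policy(torch.cat([states, hindsight_goals], dim=-1))
    loss = loss_fn(logits, actions)                       # self-imitation loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random placeholder data in place of environment rollouts.
traj = [
    (torch.randn(state_dim), int(torch.randint(n_actions, (1,))), torch.randn(state_dim))
    for _ in range(8)
]
print(hid_update(traj))
```

In the multi-step extension described in the paper, analogous relabeled pairs spanning more than one transition are added only when they are consistent with the already-learned shorter-horizon policy, which is the role of Policy Continuation.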

Author Information

Hao Sun (CUHK)
Zhizhong Li (The Chinese University of Hong Kong)
Xiaotong Liu (Peking University)
Bolei Zhou (CUHK)
Dahua Lin (The Chinese University of Hong Kong)
