Poster
Model-Based Reinforcement Learning via Imagination with Derived Memory
Yao Mu · Yuzheng Zhuang · Bin Wang · Guangxiang Zhu · Wulong Liu · Jianyu Chen · Ping Luo · Shengbo Li · Chongjie Zhang · Jianye Hao

Thu Dec 09 12:30 AM -- 02:00 AM (PST)

Model-based reinforcement learning aims to improve the sample efficiency of policy learning by modeling the dynamics of the environment. Recently, latent dynamics models have been further developed to enable fast planning in a compact space. Such a model summarizes the high-dimensional experiences of an agent, mimicking the memory function of humans. Learning policies via imagination with the latent model shows great potential for solving complex tasks. However, considering only memories from real experiences during imagination could limit its advantages. Inspired by the memory prosthesis proposed by neuroscientists, we present a novel model-based reinforcement learning framework called Imagining with Derived Memory (IDM). It enables the agent to learn a policy from enriched, diverse imagination with prediction-reliability weights, thus improving sample efficiency and policy robustness. Experiments on various high-dimensional visual control tasks in the DMControl benchmark demonstrate that IDM outperforms previous state-of-the-art methods in terms of policy robustness and further improves the sample efficiency of the model-based method.
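The idea of weighting imagined experience by prediction reliability can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the function `reliability_weighted_loss`, the exponential weighting rule, and the `temperature` parameter are all assumptions introduced here to show how rollouts from a less trustworthy model region could contribute less to the policy objective.

```python
import numpy as np

def reliability_weighted_loss(policy_returns, prediction_errors, temperature=1.0):
    """Weight imagined-rollout returns by a reliability score derived from
    the latent model's prediction error (illustrative sketch only).

    policy_returns:    estimated return of each imagined rollout
    prediction_errors: model prediction error for each rollout's trajectory
    """
    errors = np.asarray(prediction_errors, dtype=float)
    returns = np.asarray(policy_returns, dtype=float)
    # Higher prediction error -> lower reliability weight.
    weights = np.exp(-errors / temperature)
    weights = weights / weights.sum()  # normalize across rollouts
    # Policy objective: maximize the reliability-weighted mean return,
    # so the loss is its negation.
    return -float(np.sum(weights * returns))
```

With equal prediction errors this reduces to the ordinary mean-return objective; as one rollout's error grows, its influence on the policy update shrinks toward zero.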

Author Information

Yao Mu (The University of Hong Kong)
Yuzheng Zhuang (Huawei Technologies Co. Ltd.)
Bin Wang (Huawei Noah's Ark Lab)
Guangxiang Zhu (Tsinghua university)
Wulong Liu (Huawei Noah's Ark Lab)
Jianyu Chen (Tsinghua University)
Ping Luo (The University of Hong Kong)
Shengbo Li (Tsinghua University)
Chongjie Zhang (Tsinghua University)
Jianye Hao (Tianjin University)