In Model-based Reinforcement Learning (MBRL), model learning is critical since an inaccurate model can bias policy learning by generating misleading samples. However, learning an accurate model is difficult because the policy is continually updated, and the induced distribution over visited states used for model learning shifts accordingly. Prior methods alleviate this issue by quantifying the uncertainty of model-generated samples. However, these methods only quantify the uncertainty passively after the samples have been generated, rather than foreseeing it before model trajectories fall into highly uncertain regions. The resulting low-quality samples can induce unstable learning targets and hinder the optimization of the policy. Moreover, although the model is trained to minimize one-step prediction errors, it is generally used for multi-step prediction, leading to a mismatch between the objectives of model learning and model usage. To this end, we propose Plan To Predict (P2P), an MBRL framework that treats the model rollout process as a sequential decision-making problem by reversely considering the model as a decision maker and the current policy as the dynamics. In this way, the model can quickly adapt to the current policy and foresee the multi-step future uncertainty when generating trajectories. Theoretically, we show that the performance of P2P can be guaranteed by approximately optimizing a lower bound of the true environment return. Empirical results demonstrate that P2P achieves state-of-the-art performance on several challenging benchmark tasks.
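The role reversal described in the abstract can be illustrated with a minimal, hypothetical sketch: an ensemble model "acts" by producing the next-state prediction, the fixed policy supplies actions (playing the part of the dynamics), and ensemble disagreement serves as an uncertainty signal that the rollout can foresee and react to. All names here (`EnsembleModel`, `uncertainty_budget`, the linear toy dynamics) are illustrative assumptions and do not reproduce P2P's actual training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(state):
    """Fixed current policy: from the model's viewpoint, this is the 'dynamics'."""
    return -0.5 * state  # simple linear feedback action

class EnsembleModel:
    """Toy ensemble of perturbed linear models; member disagreement proxies uncertainty."""
    def __init__(self, n_members=5):
        base = np.array([[1.0, 0.1], [0.0, 1.0]])
        self.members = [base + 0.05 * rng.standard_normal((2, 2))
                        for _ in range(n_members)]

    def predict(self, state, action):
        preds = [A @ state + 0.1 * action for A in self.members]
        mean = np.mean(preds, axis=0)
        uncertainty = float(np.mean(np.var(preds, axis=0)))  # ensemble disagreement
        return mean, uncertainty

def rollout(model, state, horizon=10, uncertainty_budget=0.05):
    """Roles reversed: the model 'decides' each prediction, the policy supplies
    the action, and generation stops before entering highly uncertain regions."""
    traj = [state]
    for _ in range(horizon):
        action = policy(state)          # policy plays the role of the environment
        state, u = model.predict(state, action)
        if u > uncertainty_budget:      # foresee and avoid high-uncertainty regions
            break
        traj.append(state)
    return traj
```

In the paper's framing this stopping rule would instead be learned, with the model optimizing multi-step rollout quality directly rather than thresholding a fixed disagreement score.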
Author Information
Zifan Wu (Sun Yat-sen University)
Chao Yu (Sun Yat-sen University)
Chen Chen (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Jianye Hao (Tianjin University)
Hankz Hankui Zhuo (Sun Yat-sen University)
More from the Same Authors
- 2022 Poster: Multiagent Q-learning with Sub-Team Coordination »
  Wenhan Huang · Kai Li · Kun Shao · Tianze Zhou · Matthew Taylor · Jun Luo · Dongge Wang · Hangyu Mao · Jianye Hao · Jun Wang · Xiaotie Deng
- 2022 Poster: A Unified Diversity Measure for Multiagent Reinforcement Learning »
  Zongkai Liu · Chao Yu · Yaodong Yang · peng sun · Zifan Wu · Yuan Li
- 2022 Poster: Transformer-based Working Memory for Multiagent Reinforcement Learning with Action Parsing »
  Yaodong Yang · Guangyong Chen · Weixun Wang · Xiaotian Hao · Jianye Hao · Pheng-Ann Heng
- 2022 Poster: Versatile Multi-stage Graph Neural Network for Circuit Representation »
  shuwen yang · Zhihao Yang · Dong Li · Yingxueff Zhang · Zhanguang Zhang · Guojie Song · Jianye Hao
- 2022 : Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes »
  Min Zhang · Hongyao Tang · Jianye Hao · YAN ZHENG
- 2022 : EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model »
  Yifu Yuan · Jianye Hao · Fei Ni · Yao Mu · YAN ZHENG · Yujing Hu · Jinyi Liu · Yingfeng Chen · Changjie Fan
- 2022 : ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation »
  Pengyi Li · Hongyao Tang · Jianye Hao · YAN ZHENG · Xian Fu · Zhaopeng Meng
- 2022 : Planning Immediate Landmarks of Targets for Model-Free Skill Transfer across Agents »
  Minghuan Liu · Zhengbang Zhu · Menghui Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao
- 2022 Spotlight: Lightning Talks 5A-3 »
  Minting Pan · Xiang Chen · Wenhan Huang · Can Chang · Zhecheng Yuan · Jianzhun Shao · Yushi Cao · Peihao Chen · Ke Xue · Zhengrong Xue · Zhiqiang Lou · Xiangming Zhu · Lei Li · Zhiming Li · Kai Li · Jiacheng Xu · Dongyu Ji · Ni Mu · Kun Shao · Tianpei Yang · Kunyang Lin · Ningyu Zhang · Yunbo Wang · Lei Yuan · Bo Yuan · Hongchang Zhang · Jiajun Wu · Tianze Zhou · Xueqian Wang · Ling Pan · Yuhang Jiang · Xiaokang Yang · Xiaozhuan Liang · Hao Zhang · Weiwen Hu · Miqing Li · YAN ZHENG · Matthew Taylor · Huazhe Xu · Shumin Deng · Chao Qian · YI WU · Shuncheng He · Wenbing Huang · Chuanqi Tan · Zongzhang Zhang · Yang Gao · Jun Luo · Yi Li · Xiangyang Ji · Thomas Li · Mingkui Tan · Fei Huang · Yang Yu · Huazhe Xu · Dongge Wang · Jianye Hao · Chuang Gan · Yang Liu · Luo Si · Hangyu Mao · Huajun Chen · Jianye Hao · Jun Wang · Xiaotie Deng
- 2022 Spotlight: Plan To Predict: Learning an Uncertainty-Foreseeing Model For Model-Based Reinforcement Learning »
  Zifan Wu · Chao Yu · Chen Chen · Jianye Hao · Hankz Hankui Zhuo
- 2022 Spotlight: DOMINO: Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning »
  Yao Mu · Yuzheng Zhuang · Fei Ni · Bin Wang · Jianyu Chen · Jianye Hao · Ping Luo
- 2022 Spotlight: GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis »
  Yushi Cao · Zhiming Li · Tianpei Yang · Hao Zhang · YAN ZHENG · Yi Li · Jianye Hao · Yang Liu
- 2022 Spotlight: Multiagent Q-learning with Sub-Team Coordination »
  Wenhan Huang · Kai Li · Kun Shao · Tianze Zhou · Matthew Taylor · Jun Luo · Dongge Wang · Hangyu Mao · Jianye Hao · Jun Wang · Xiaotie Deng
- 2022 Spotlight: Lightning Talks 5A-1 »
  Yao Mu · Jin Zhang · Haoyi Niu · Rui Yang · Mingdong Wu · Ze Gong · Shubham Sharma · Chenjia Bai · Yu ("Tony") Zhang · Siyuan Li · Yuzheng Zhuang · Fangwei Zhong · Yiwen Qiu · Xiaoteng Ma · Fei Ni · Yulong Xia · Chongjie Zhang · Hao Dong · Ming Li · Zhaoran Wang · Bin Wang · Chongjie Zhang · Jianyu Chen · Guyue Zhou · Lei Han · Jianming HU · Jianye Hao · Xianyuan Zhan · Ping Luo
- 2022 Spotlight: Lightning Talks 3A-2 »
  shuwen yang · Xu Zhang · Delvin Ce Zhang · Lan-Zhe Guo · Renzhe Xu · Zhuoer Xu · Yao-Xiang Ding · Weihan Li · Xingxuan Zhang · Xi-Zhu Wu · Zhenyuan Yuan · Hady Lauw · Yu Qi · Yi-Ge Zhang · Zhihao Yang · Guanghui Zhu · Dong Li · Changhua Meng · Kun Zhou · Gang Pan · Zhi-Fan Wu · Bo Li · Minghui Zhu · Zhi-Hua Zhou · Yafeng Zhang · Yingxueff Zhang · shiwen cui · Jie-Jing Shao · Zhanguang Zhang · Zhenzhe Ying · Xiaolong Chen · Yu-Feng Li · Guojie Song · Peng Cui · Weiqiang Wang · Ming GU · Jianye Hao · Yihua Huang
- 2022 Spotlight: Versatile Multi-stage Graph Neural Network for Circuit Representation »
  shuwen yang · Zhihao Yang · Dong Li · Yingxueff Zhang · Zhanguang Zhang · Guojie Song · Jianye Hao
- 2022 Poster: GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis »
  Yushi Cao · Zhiming Li · Tianpei Yang · Hao Zhang · YAN ZHENG · Yi Li · Jianye Hao · Yang Liu
- 2022 Poster: DOMINO: Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning »
  Yao Mu · Yuzheng Zhuang · Fei Ni · Bin Wang · Jianyu Chen · Jianye Hao · Ping Luo
- 2022 Poster: The Policy-gradient Placement and Generative Routing Neural Networks for Chip Design »
  Ruoyu Cheng · Xianglong Lyu · Yang Li · Junjie Ye · Jianye Hao · Junchi Yan
- 2021 Poster: Coordinated Proximal Policy Optimization »
  Zifan Wu · Chao Yu · Deheng Ye · Junge Zhang · haiyin piao · Hankz Hankui Zhuo
- 2020 Poster: Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping »
  Yujing Hu · Weixun Wang · Hangtian Jia · Yixiang Wang · Yingfeng Chen · Jianye Hao · Feng Wu · Changjie Fan
- 2018 Poster: A Deep Bayesian Policy Reuse Approach Against Non-Stationary Agents »
  YAN ZHENG · Zhaopeng Meng · Jianye Hao · Zongzhang Zhang · Tianpei Yang · Changjie Fan
- 2012 Poster: Action-Model Based Multi-agent Plan Recognition »
  Hankz Hankui Zhuo · Qiang Yang · Subbarao Kambhampati