Model-based reinforcement learning methods learn a dynamics model from real data sampled from the environment and leverage it to generate simulated data for deriving an agent. However, the potential distribution mismatch between simulated and real data can degrade performance, and despite much effort devoted to reducing this mismatch, existing methods fail to address it explicitly. In this paper, we investigate how to bridge the gap between real and simulated data caused by inaccurate model estimation, for better policy optimization. We first derive a lower bound on the expected return, which naturally inspires a bound-maximization algorithm that aligns the simulated and real data distributions. To this end, we propose AMPO, a novel model-based reinforcement learning framework that introduces unsupervised model adaptation to minimize the integral probability metric (IPM) between feature distributions of real and simulated data. Instantiating the framework with the Wasserstein-1 distance yields a practical model-based approach. Empirically, our approach achieves state-of-the-art sample efficiency on a range of continuous control benchmark tasks.
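The core mechanism, minimizing an IPM between real and simulated feature distributions, is compact enough to sketch. Recall the IPM d_F(P, Q) = sup_{f in F} |E_{x~P}[f(x)] - E_{x~Q}[f(x)]|; taking F to be the set of 1-Lipschitz functions recovers the Wasserstein-1 distance. Below is a minimal, hypothetical PyTorch sketch of that Wasserstein-1 instantiation via the Kantorovich-Rubinstein dual, using a WGAN-GP-style gradient penalty to approximate the Lipschitz constraint. All names (Critic, w1_estimate, gradient_penalty) and hyperparameters are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scores feature vectors; trained to maximize E[f(real)] - E[f(sim)]."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def gradient_penalty(critic, real, sim):
    # Soft 1-Lipschitz constraint: push the critic's gradient norm toward 1
    # on random interpolates between real and simulated features.
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1 - eps) * sim).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

def w1_estimate(critic, real_feats, sim_feats):
    # Kantorovich-Rubinstein dual: W1 ~= sup_f E[f(real)] - E[f(sim)]
    return critic(real_feats).mean() - critic(sim_feats).mean()

# One critic ascent step (illustrative shapes and hyperparameters).
feat_dim, gp_weight = 64, 10.0
critic = Critic(feat_dim)
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)
real_feats = torch.randn(128, feat_dim)  # features of real transitions
sim_feats = torch.randn(128, feat_dim)   # features of model-simulated transitions
loss = -w1_estimate(critic, real_feats, sim_feats) \
       + gp_weight * gradient_penalty(critic, real_feats, sim_feats)
opt.zero_grad(); loss.backward(); opt.step()
# The dynamics model's feature extractor would then take a step to
# *minimize* the W1 estimate, aligning simulated features with real ones.
```

In this adversarial scheme, the critic and the model's feature extractor alternate updates: the critic tightens the dual estimate of W1, and the model reduces it, which is one standard way to realize the IPM-minimization step the abstract describes.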
Author Information
Jian Shen (Shanghai Jiao Tong University)
Han Zhao (University of Illinois at Urbana-Champaign)
Weinan Zhang (Shanghai Jiao Tong University)
Yong Yu (Shanghai Jiao Tong University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Model-based Policy Optimization with Unsupervised Model Adaptation
  Tue. Dec 8th, 04:00 -- 04:10 PM, Room: Orals & Spotlights: Reinforcement Learning
More from the Same Authors
- 2021: Robust and Personalized Federated Learning with Spurious Features: an Adversarial Approach
  Xiaoyang Wang · Han Zhao · Klara Nahrstedt · Sanmi Koyejo
- 2021 Poster: Curriculum Offline Imitating Learning
  Minghuan Liu · Hanye Zhao · Zhengyu Yang · Jian Shen · Weinan Zhang · Li Zhao · Tie-Yan Liu
- 2021 Poster: On Effective Scheduling of Model-based Reinforcement Learning
  Hang Lai · Jian Shen · Weinan Zhang · Yimin Huang · Xing Zhang · Ruiming Tang · Yong Yu · Zhenguo Li
- 2020 Poster: Efficient Projection-free Algorithms for Saddle Point Problems
  Cheng Chen · Luo Luo · Weinan Zhang · Yong Yu
- 2020 Poster: Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation
  Han Zhao · Jianfeng Chi · Yuan Tian · Geoffrey Gordon
- 2020 Poster: Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift
  Remi Tachet des Combes · Han Zhao · Yu-Xiang Wang · Geoffrey Gordon
- 2020 Poster: Neural Methods for Point-wise Dependency Estimation
  Yao-Hung Hubert Tsai · Han Zhao · Makoto Yamada · Louis-Philippe Morency · Russ Salakhutdinov
- 2020 Spotlight: Neural Methods for Point-wise Dependency Estimation
  Yao-Hung Hubert Tsai · Han Zhao · Makoto Yamada · Louis-Philippe Morency · Russ Salakhutdinov
- 2017 Demonstration: MAgent: A Many-Agent Reinforcement Learning Research Platform for Artificial Collective Intelligence
  Lianmin Zheng · Jiacheng Yang · Han Cai · Weinan Zhang · Jun Wang · Yong Yu
- 2008 Poster: Translated Learning
  Wenyuan Dai · Yuqiang Chen · Gui-Rong Xue · Qiang Yang · Yong Yu
- 2008 Spotlight: Translated Learning
  Wenyuan Dai · Yuqiang Chen · Gui-Rong Xue · Qiang Yang · Yong Yu