Model-based reinforcement learning methods learn a dynamics model from real data sampled from the environment and use it to generate simulated data for training an agent. However, a distribution mismatch between the simulated and real data can degrade performance. Although much effort has been devoted to reducing this mismatch, existing methods fail to address it explicitly. In this paper, we investigate how to bridge the gap between real and simulated data caused by inaccurate model estimation so as to improve policy optimization. We first derive a lower bound on the expected return, which naturally suggests a bound-maximization algorithm that aligns the simulated and real data distributions. To this end, we propose AMPO, a novel model-based reinforcement learning framework that introduces unsupervised model adaptation to minimize the integral probability metric (IPM) between the feature distributions of real and simulated data. Instantiating the framework with the Wasserstein-1 distance yields a practical model-based algorithm. Empirically, our approach achieves state-of-the-art sample efficiency on a range of continuous control benchmark tasks.
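As a minimal illustration of the quantity the abstract proposes to minimize: for one-dimensional empirical distributions with equal sample counts, the Wasserstein-1 distance reduces to the mean absolute difference between sorted samples. The sketch below is a toy, hypothetical example of that closed form only; AMPO itself works with high-dimensional feature distributions and the IPM dual form with a learned critic, neither of which is shown here.

```python
def wasserstein1_1d(real_feats, sim_feats):
    """Empirical Wasserstein-1 distance between two equal-sized 1-D samples.

    For 1-D empirical measures with the same number of points, the optimal
    transport plan simply matches sorted samples, so W1 is the mean
    absolute difference of the sorted values.
    """
    assert len(real_feats) == len(sim_feats), "toy version assumes equal sample counts"
    return sum(
        abs(r - s) for r, s in zip(sorted(real_feats), sorted(sim_feats))
    ) / len(real_feats)

# Identical distributions give zero distance; shifting every sample by a
# constant c gives distance exactly c.
print(wasserstein1_1d([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0
print(wasserstein1_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

Driving this distance to zero means the simulated feature distribution matches the real one, which is the alignment objective the framework optimizes (in practice via the IPM dual with neural critics, not this sorting trick).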
Author Information
Jian Shen (Shanghai Jiao Tong University)
Han Zhao (University of Illinois at Urbana-Champaign)
Weinan Zhang (Shanghai Jiao Tong University)
Yong Yu (Shanghai Jiao Tong University)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Model-based Policy Optimization with Unsupervised Model Adaptation
  Tue. Dec 8th, 04:00 -- 04:10 PM · Orals & Spotlights: Reinforcement Learning
More from the Same Authors
- 2021: Robust and Personalized Federated Learning with Spurious Features: an Adversarial Approach
  Xiaoyang Wang · Han Zhao · Klara Nahrstedt · Sanmi Koyejo
- 2022 Poster: Learning Enhanced Representation for Tabular Data via Neighborhood Propagation
  Kounianhua Du · Weinan Zhang · Ruiwen Zhou · Yangkun Wang · Xilong Zhao · Jiarui Jin · Quan Gan · Zheng Zhang · David P Wipf
- 2022: Visual Imitation Learning with Patch Rewards
  Minghuan Liu · Tairan He · Weinan Zhang · Shuicheng Yan · Zhongwen Xu
- 2022: Planning Immediate Landmarks of Targets for Model-Free Skill Transfer across Agents
  Minghuan Liu · Zhengbang Zhu · Menghui Zhu · Yuzheng Zhuang · Weinan Zhang · Jianye Hao
- 2023 Poster: Lending Interaction Wings to Recommender Systems with Conversational Agents
  Jiarui Jin · Xianyu Chen · Fanghua Ye · Mengyue Yang · Yue Feng · Weinan Zhang · Yong Yu · Jun Wang
- 2023 Poster: Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning
  Haoran He · Chenjia Bai · Kang Xu · Zhuoran Yang · Weinan Zhang · Dong Wang · Bin Zhao · Xuelong Li
- 2023 Poster: Fundamental Limits and Tradeoffs in Invariant Representation Learning
  Han Zhao · Chen Dan · Bryon Aragam · Tommi Jaakkola · Geoffrey Gordon · Pradeep Ravikumar
- 2022 Poster: Honor of Kings Arena: an Environment for Generalization in Competitive Reinforcement Learning
  Hua Wei · Jingxiao Chen · Xiyang Ji · Hongyang Qin · Minwen Deng · Siqin Li · Liang Wang · Weinan Zhang · Yong Yu · Liu Linc · Lanxiao Huang · Deheng Ye · Qiang Fu · Wei Yang
- 2022 Poster: Reinforcement Learning with Automated Auxiliary Loss Search
  Tairan He · Yuge Zhang · Kan Ren · Minghuan Liu · Che Wang · Weinan Zhang · Yuqing Yang · Dongsheng Li
- 2022 Poster: Bootstrapped Transformer for Offline Reinforcement Learning
  Kerong Wang · Hanye Zhao · Xufang Luo · Kan Ren · Weinan Zhang · Dongsheng Li
- 2022 Poster: PerfectDou: Dominating DouDizhu with Perfect Information Distillation
  Guan Yang · Minghuan Liu · Weijun Hong · Weinan Zhang · Fei Fang · Guangjun Zeng · Yue Lin
- 2022 Poster: NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning
  Rong-Jun Qin · Xingyuan Zhang · Songyi Gao · Xiong-Hui Chen · Zewen Li · Weinan Zhang · Yang Yu
- 2022 Poster: Multi-Agent Reinforcement Learning is a Sequence Modeling Problem
  Muning Wen · Jakub Kuba · Runji Lin · Weinan Zhang · Ying Wen · Jun Wang · Yaodong Yang
- 2021 Poster: Curriculum Offline Imitating Learning
  Minghuan Liu · Hanye Zhao · Zhengyu Yang · Jian Shen · Weinan Zhang · Li Zhao · Tie-Yan Liu
- 2021 Poster: On Effective Scheduling of Model-based Reinforcement Learning
  Hang Lai · Jian Shen · Weinan Zhang · Yimin Huang · Xing Zhang · Ruiming Tang · Yong Yu · Zhenguo Li
- 2020 Poster: Efficient Projection-free Algorithms for Saddle Point Problems
  Cheng Chen · Luo Luo · Weinan Zhang · Yong Yu
- 2020 Poster: Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation
  Han Zhao · Jianfeng Chi · Yuan Tian · Geoffrey Gordon
- 2020 Poster: Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift
  Remi Tachet des Combes · Han Zhao · Yu-Xiang Wang · Geoffrey Gordon
- 2020 Poster: Neural Methods for Point-wise Dependency Estimation
  Yao-Hung Hubert Tsai · Han Zhao · Makoto Yamada · Louis-Philippe Morency · Russ Salakhutdinov
- 2020 Spotlight: Neural Methods for Point-wise Dependency Estimation
  Yao-Hung Hubert Tsai · Han Zhao · Makoto Yamada · Louis-Philippe Morency · Russ Salakhutdinov
- 2017 Demonstration: MAgent: A Many-Agent Reinforcement Learning Research Platform for Artificial Collective Intelligence
  Lianmin Zheng · Jiacheng Yang · Han Cai · Weinan Zhang · Jun Wang · Yong Yu
- 2008 Poster: Translated Learning
  Wenyuan Dai · Yuqiang Chen · Gui-Rong Xue · Qiang Yang · Yong Yu
- 2008 Spotlight: Translated Learning
  Wenyuan Dai · Yuqiang Chen · Gui-Rong Xue · Qiang Yang · Yong Yu