Probabilistic dynamics model ensembles are widely used in existing model-based reinforcement learning methods, as they outperform a single dynamics model in both asymptotic performance and sample efficiency. In this paper, we provide both practical and theoretical insights into the empirical success of the probabilistic dynamics model ensemble through the lens of Lipschitz continuity. We find that, for a value function, the stronger its Lipschitz condition is, the smaller the gap between the true-dynamics- and learned-dynamics-induced Bellman operators, and thus the closer the converged value function is to the optimal value function. Hence, we hypothesize that the key functionality of the probabilistic dynamics model ensemble is to regularize the Lipschitz condition of the value function using generated samples. To validate this hypothesis, we devise two practical robust training mechanisms that directly regularize the Lipschitz condition of the value function: computing adversarial noise and regularizing the value network's spectral norm. Empirical results show that, combined with our mechanisms, model-based RL algorithms with a single dynamics model outperform those with an ensemble of probabilistic dynamics models. These findings not only support the theoretical insight but also provide a practical solution for developing computationally efficient model-based RL algorithms.
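The abstract mentions regularizing the value network's spectral norm to control its Lipschitz constant. As a rough illustration of the underlying idea (not the paper's actual implementation), the sketch below estimates the spectral norm of a single weight matrix by power iteration and rescales it to enforce a per-layer Lipschitz bound; all names (`spectral_norm`, `lipschitz_project`, etc.) are hypothetical helpers, and the constant `target` is an assumed per-layer bound.

```python
import random

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def l2_norm(v):
    return sum(vi * vi for vi in v) ** 0.5

def spectral_norm(W, iters=50, seed=0):
    """Estimate the largest singular value of W via power iteration."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(len(W[0]))]
    Wt = transpose(W)
    for _ in range(iters):
        u = matvec(W, v)
        u = [ui / (l2_norm(u) + 1e-12) for ui in u]
        v = matvec(Wt, u)
        v = [vi / (l2_norm(v) + 1e-12) for vi in v]
    # sigma ~= u^T W v once u, v align with the top singular vectors
    return sum(ui * wi for ui, wi in zip(u, matvec(W, v)))

def lipschitz_project(W, target=1.0):
    """Rescale W so its spectral norm is at most `target`.

    Since a linear layer's Lipschitz constant (in the 2-norm) equals
    its weight matrix's spectral norm, bounding it per layer bounds
    the Lipschitz constant of the whole value network.
    """
    sigma = spectral_norm(W)
    if sigma <= target:
        return W
    scale = target / sigma
    return [[w * scale for w in row] for row in W]
```

A network built from such projected layers with 1-Lipschitz activations (e.g. ReLU) has an overall Lipschitz constant bounded by the product of the per-layer targets; this is the standard spectral-normalization argument, sketched here independently of the paper's specifics.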
Author Information
Ruijie Zheng (University of Maryland, College Park)
Xiyao Wang (Center for Research on Intelligent System and Engineering, Institute of Automation, CAS, University of Chinese Academy of Sciences)
Huazhe Xu (Tsinghua University)
Furong Huang (University of Maryland)
More from the Same Authors
-
2021 : Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL »
Yanchao Sun · Ruijie Zheng · Yongyuan Liang · Furong Huang -
2021 : Efficiently Improving the Robustness of RL Agents against Strongest Adversaries »
Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang -
2021 : Transfer RL across Observation Feature Spaces via Model-Based Regularization »
Yanchao Sun · Ruijie Zheng · Xiyao Wang · Andrew Cohen · Furong Huang -
2022 Poster: Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning »
Zhecheng Yuan · Zhengrong Xue · Bo Yuan · Xueqian Wang · YI WU · Yang Gao · Huazhe Xu -
2022 : SMART: Self-supervised Multi-task pretrAining with contRol Transformers »
Yanchao Sun · shuang ma · Ratnesh Madaan · Rogerio Bonatti · Furong Huang · Ashish Kapoor -
2022 : Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning »
Souradip Chakraborty · Amrit Bedi · Alec Koppel · Furong Huang · Pratap Tokekar · Dinesh Manocha -
2022 : GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint »
Paiheng Xu · Yuhang Zhou · Bang An · Wei Ai · Furong Huang -
2022 : Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning »
Xiangyu Liu · Souradip Chakraborty · Furong Huang -
2022 : Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity »
Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang -
2022 : Faster Hyperparameter Search on Graphs via Calibrated Dataset Condensation »
Mucong Ding · Xiaoyu Liu · Tahseen Rabbani · Furong Huang -
2022 : Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation »
Linfeng Zhao · Huazhe Xu · Lawson Wong -
2022 : Simple Emergent Action Representations from Multi-Task Policy Training »
Pu Hua · Yubei Chen · Huazhe Xu -
2022 : DP-InstaHide: Data Augmentations Provably Enhance Guarantees Against Dataset Manipulations »
Eitan Borgnia · Jonas Geiping · Valeriia Cherepanova · Liam Fowl · Arjun Gupta · Amin Ghiasi · Furong Huang · Micah Goldblum · Tom Goldstein -
2023 Poster: CEIL: Generalized Contextual Imitation Learning »
Jinxin Liu · Li He · Yachen Kang · Zifeng Zhuang · Donglin Wang · Huazhe Xu -
2023 Poster: H-InDex: Visual Reinforcement Learning with Hand-Informed Representations for Dexterous Manipulation »
Yanjie Ze · Yuyao Liu · Ruizhe Shi · Jiaxin Qin · Zhecheng Yuan · Jiashun Wang · Xiaolong Wang · Huazhe Xu -
2023 Poster: Can Pre-Trained Text-to-Image Models Generate Visual Goals for Reinforcement Learning? »
Jialu Gao · Kaizhe Hu · Guowei Xu · Huazhe Xu -
2023 Poster: $\texttt{TACO}$: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning »
Ruijie Zheng · Xiyao Wang · Yanchao Sun · Shuang Ma · Jieyu Zhao · Huazhe Xu · Hal Daumé III · Furong Huang -
2023 Poster: Large-Scale Distributed Learning via Private On-Device LSH »
Tahseen Rabbani · Marco Bornstein · Furong Huang -
2023 Poster: MoVie: Visual Model-Based Policy Adaptation for View Generalization »
Sizhe Yang · Yanjie Ze · Huazhe Xu -
2023 Poster: Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise »
Arpit Bansal · Eitan Borgnia · Hong-Min Chu · Jie Li · Hamid Kazemi · Furong Huang · Micah Goldblum · Jonas Geiping · Tom Goldstein -
2023 Poster: C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder »
Xiaoyu Liu · Jiaxin Yuan · Bang An · Yuancheng Xu · Yifan Yang · Furong Huang -
2023 Poster: RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization »
Zhecheng Yuan · Sizhe Yang · Pu Hua · Can Chang · Kaizhe Hu · Xiaolong Wang · Huazhe Xu -
2022 : Contributed Talk: Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning »
Xiangyu Liu · Souradip Chakraborty · Furong Huang -
2022 Spotlight: Lightning Talks 5A-3 »
Minting Pan · Xiang Chen · Wenhan Huang · Can Chang · Zhecheng Yuan · Jianzhun Shao · Yushi Cao · Peihao Chen · Ke Xue · Zhengrong Xue · Zhiqiang Lou · Xiangming Zhu · Lei Li · Zhiming Li · Kai Li · Jiacheng Xu · Dongyu Ji · Ni Mu · Kun Shao · Tianpei Yang · Kunyang Lin · Ningyu Zhang · Yunbo Wang · Lei Yuan · Bo Yuan · Hongchang Zhang · Jiajun Wu · Tianze Zhou · Xueqian Wang · Ling Pan · Yuhang Jiang · Xiaokang Yang · Xiaozhuan Liang · Hao Zhang · Weiwen Hu · Miqing Li · YAN ZHENG · Matthew Taylor · Huazhe Xu · Shumin Deng · Chao Qian · YI WU · Shuncheng He · Wenbing Huang · Chuanqi Tan · Zongzhang Zhang · Yang Gao · Jun Luo · Yi Li · Xiangyang Ji · Thomas Li · Mingkui Tan · Fei Huang · Yang Yu · Huazhe Xu · Dongge Wang · Jianye Hao · Chuang Gan · Yang Liu · Luo Si · Hangyu Mao · Huajun Chen · Jianye Hao · Jun Wang · Xiaotie Deng -
2022 Spotlight: E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance »
Can Chang · Ni Mu · Jiajun Wu · Ling Pan · Huazhe Xu -
2022 Spotlight: Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning »
Zhecheng Yuan · Zhengrong Xue · Bo Yuan · Xueqian Wang · YI WU · Yang Gao · Huazhe Xu -
2022 Spotlight: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao -
2022 : SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication »
Marco Bornstein · Tahseen Rabbani · Evan Wang · Amrit Bedi · Furong Huang -
2022 Poster: Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability »
Roman Levin · Manli Shu · Eitan Borgnia · Furong Huang · Micah Goldblum · Tom Goldstein -
2022 Poster: Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity »
Mucong Ding · Tahseen Rabbani · Bang An · Evan Wang · Furong Huang -
2022 Poster: E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance »
Can Chang · Ni Mu · Jiajun Wu · Ling Pan · Huazhe Xu -
2022 Poster: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning »
Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang -
2022 Poster: End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking »
Arpit Bansal · Avi Schwarzschild · Eitan Borgnia · Zeyad Emam · Furong Huang · Micah Goldblum · Tom Goldstein -
2022 Poster: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao -
2022 Poster: Transferring Fairness under Distribution Shifts via Fair Consistency Regularization »
Bang An · Zora Che · Mucong Ding · Furong Huang