In many reinforcement learning (RL) applications, the observation space is specified by human developers and restricted by physical realizations, and may therefore change dramatically over time (e.g., an increased number of observable features). When the observation space changes, however, the previous policy usually fails because its input features no longer match, and a new policy has to be trained from scratch, which is inefficient in both computation and samples. In this paper, we propose a novel algorithm that extracts the latent-space dynamics in the source task and transfers the dynamics model to the target task through a model-based regularizer. Theoretical analysis shows that the transferred dynamics model helps with representation learning in the target task. Our algorithm handles drastic changes of the observation space (e.g., from vector-based to image-based observations) without any inter-task mapping or prior knowledge of the target task. Empirical results show that our algorithm significantly improves the efficiency and stability of learning in the target task.
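To make the transfer idea concrete, below is a minimal sketch (not the authors' implementation) of how a frozen latent dynamics model from the source task could act as a model-based regularizer when training a new encoder for a changed observation space. The PyTorch setup, the network shapes, and the regularizer weight `beta` are all illustrative assumptions; in a real setting the `dynamics` network would be loaded from a source-task checkpoint rather than freshly initialized.

```python
# Sketch of a model-based regularizer for observation-space transfer.
# Assumption: a latent dynamics model f(z, a) -> z' was learned in the
# source task; it is frozen here and used to shape the target-task encoder.
import torch
import torch.nn as nn

latent_dim, action_dim, target_obs_dim = 32, 4, 128  # illustrative sizes

# Latent dynamics model from the source task (stand-in: in practice,
# load pretrained weights instead of random initialization).
dynamics = nn.Sequential(
    nn.Linear(latent_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
)
for p in dynamics.parameters():
    p.requires_grad_(False)  # transferred, not re-trained

# New encoder for the target task's changed observation space.
encoder = nn.Sequential(
    nn.Linear(target_obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=3e-4)

def dynamics_regularizer(obs, action, next_obs):
    """Penalize encodings whose transitions disagree with the frozen source dynamics."""
    z, z_next = encoder(obs), encoder(next_obs)
    z_next_pred = dynamics(torch.cat([z, action], dim=-1))
    return ((z_next_pred - z_next) ** 2).mean()

# One illustrative update on a fake batch of target-task transitions.
obs = torch.randn(256, target_obs_dim)
action = torch.randn(256, action_dim)
next_obs = torch.randn(256, target_obs_dim)

beta = 1.0  # assumed weight of the model-based regularizer
loss = beta * dynamics_regularizer(obs, action, next_obs)  # + RL loss in practice
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice this penalty would be added to the agent's usual RL objective, so the target-task encoder is pushed toward latent representations whose transitions stay consistent with the transferred source-task dynamics.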
Author Information
Yanchao Sun (University of Maryland, College Park)
Ruijie Zheng (University of Maryland, College Park)
Xiyao Wang (Center for Research on Intelligent System and Engineering, Institute of Automation, CAS, University of Chinese Academy of Sciences)
Andrew Cohen (Unity Technologies)
Furong Huang (University of Maryland)
Furong Huang is an assistant professor of computer science. Her research focuses on machine learning, high-dimensional statistics, and distributed algorithms, covering both the theoretical analysis and the practical implementation of parallel spectral methods for latent variable graphical models. Applications of her research include fast detection algorithms that discover hidden and overlapping user communities in social networks, convolutional sparse coding models for understanding the semantic meaning of sentences and for object recognition in images, and healthcare analytics that learns a hierarchy over human diseases to guide doctors in identifying the conditions afflicting patients. Huang recently completed a postdoctoral position at Microsoft Research in New York.
More from the Same Authors
- 2021 : Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL »
  Yanchao Sun · Ruijie Zheng · Yongyuan Liang · Furong Huang
- 2021 : Efficiently Improving the Robustness of RL Agents against Strongest Adversaries »
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- 2021 : A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs »
  Mucong Ding · Kezhi Kong · Jiuhai Chen · John Kirchenbauer · Micah Goldblum · David P Wipf · Furong Huang · Tom Goldstein
- 2022 : SMART: Self-supervised Multi-task pretrAining with contRol Transformers »
  Yanchao Sun · shuang ma · Ratnesh Madaan · Rogerio Bonatti · Furong Huang · Ashish Kapoor
- 2022 : Is Model Ensemble Necessary? Model-based RL via a Single Model with Lipschitz Regularized Value Function »
  Ruijie Zheng · Xiyao Wang · Huazhe Xu · Furong Huang
- 2022 Spotlight: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2022 Poster: Distributional Reward Estimation for Effective Multi-agent Deep Reinforcement Learning »
  Jifeng Hu · Yanchao Sun · Hechang Chen · Sili Huang · haiyin piao · Yi Chang · Lichao Sun
- 2022 Poster: Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning »
  Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Furong Huang
- 2022 Poster: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach »
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2021 Poster: Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks »
  Avi Schwarzschild · Eitan Borgnia · Arjun Gupta · Furong Huang · Uzi Vishkin · Micah Goldblum · Tom Goldstein
- 2021 Poster: VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization »
  Mucong Ding · Kezhi Kong · Jingling Li · Chen Zhu · John Dickerson · Furong Huang · Tom Goldstein
- 2021 Poster: Understanding the Generalization Benefit of Model Invariance from a Data Perspective »
  Sicheng Zhu · Bang An · Furong Huang
- 2020 Poster: Convolutional Tensor-Train LSTM for Spatio-Temporal Learning »
  Jiahao Su · Wonmin Byeon · Jean Kossaifi · Furong Huang · Jan Kautz · Anima Anandkumar
- 2020 Poster: ARMA Nets: Expanding Receptive Field for Dense Prediction »
  Jiahao Su · Shiqi Wang · Furong Huang
- 2015 : Spotlight »
  Furong Huang · William Gray Roncal · Tom Goldstein
- 2015 : Convolutional Dictionary Learning through Tensor Factorization »
  Furong Huang
- 2012 Poster: Learning Mixtures of Tree Graphical Models »
  Anima Anandkumar · Daniel Hsu · Furong Huang · Sham M Kakade