Learning to perform tasks by leveraging a dataset of expert observations, also known as imitation learning from observations (ILO), is an important paradigm for learning skills without access to the expert reward function or the expert actions. We consider ILO in the setting where the expert and the learner agents operate in different environments, with the source of the discrepancy being the transition dynamics model. Recent methods for scalable ILO utilize adversarial learning to match the state-transition distributions of the expert and the learner, an approach that becomes challenging when the dynamics are dissimilar. In this work, we propose an algorithm that trains an intermediary policy in the learner environment and uses it as a surrogate expert for the learner. The intermediary policy is learned such that the state transitions generated by it are close to the state transitions in the expert dataset. To derive a practical and scalable algorithm, we employ concepts from prior work on estimating the support of a probability distribution. Experiments using MuJoCo locomotion tasks highlight that our method compares favorably to the baselines for ILO with transition dynamics mismatch.
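To make the support-estimation step concrete, below is a minimal sketch (not the authors' released code) of one common instantiation from prior work: a random-network-distillation-style estimator in which a predictor network is fit to a frozen random target on expert (s, s') pairs. Prediction error stays low on the support of the expert's state-transition distribution, so its negative exponential can serve as the reward for training the intermediary policy in the learner environment. All names (SupportEstimator, sigma, the placeholder expert data) are illustrative assumptions, not from the paper.

```python
# Sketch of a support estimator over expert state transitions, assuming an
# RND-style approach: the predictor generalizes well only near the expert
# data, so low prediction error marks the expert's support.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class SupportEstimator(nn.Module):
    """Scores (s, s') pairs by proximity to the expert transition support."""

    def __init__(self, obs_dim, feat_dim=32, sigma=1.0):
        super().__init__()
        self.target = mlp(2 * obs_dim, feat_dim)     # frozen random network
        self.predictor = mlp(2 * obs_dim, feat_dim)  # trained on expert data
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.sigma = sigma

    def fit(self, expert_s, expert_s_next, epochs=100, lr=1e-3):
        # Regress the predictor onto the frozen target on expert pairs only.
        x = torch.cat([expert_s, expert_s_next], dim=-1)
        opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)
        for _ in range(epochs):
            loss = (self.predictor(x) - self.target(x)).pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

    @torch.no_grad()
    def reward(self, s, s_next):
        x = torch.cat([s, s_next], dim=-1)
        err = (self.predictor(x) - self.target(x)).pow(2).sum(dim=-1)
        # Reward is high only where prediction error is small, i.e. on
        # (or near) the support of the expert state-transition distribution.
        return torch.exp(-self.sigma * err)


# Usage sketch: fit on the expert dataset, then train the intermediary
# policy with any RL algorithm in the learner environment, replacing the
# environment reward with estimator.reward(s, s_next).
obs_dim = 11
estimator = SupportEstimator(obs_dim)
# Placeholder expert data, for illustration only.
expert_s, expert_s_next = torch.randn(256, obs_dim), torch.randn(256, obs_dim)
estimator.fit(expert_s, expert_s_next)
print(estimator.reward(expert_s[:5], expert_s_next[:5]))
```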
Author Information
Tanmay Gangwani (University of Illinois, Urbana-Champaign)
I am a Ph.D. student in Computer Science at the University of Illinois, Urbana-Champaign, advised by Jian Peng. I'm interested in machine learning, especially reinforcement learning. My research focuses on designing algorithms that efficiently leverage expert demonstrations for RL (imitation learning), address the exploration challenge in complex environments, and use generative modeling methods for model-based RL. For details, please visit https://tgangwani.github.io
Yuan Zhou (UIUC)
Jian Peng (University of Illinois at Urbana-Champaign)
More from the Same Authors
- 2021: Hindsight Foresight Relabeling for Meta-Reinforcement Learning (Michael Wan · Jian Peng · Tanmay Gangwani)
- 2022 Poster: Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation (Zhizhou Ren · Anji Liu · Yitao Liang · Jian Peng · Jianzhu Ma)
- 2022 Poster: Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures (Shitong Luo · Yufeng Su · Xingang Peng · Sheng Wang · Jian Peng · Jianzhu Ma)
- 2023 Poster: Equivariant Neural Operator Learning with Graphon Convolution (Chaoran Cheng · Jian Peng)
- 2023 Poster: LinkerNet: Fragment Poses and Linker Co-Design with 3D Equivariant Diffusion (Jiaqi Guan · Xingang Peng · PeiQi Jiang · Yunan Luo · Jian Peng · Jianzhu Ma)
- 2021 Poster: A 3D Generative Model for Structure-Based Drug Design (Shitong Luo · Jiaqi Guan · Jianzhu Ma · Jian Peng)
- 2020 Poster: Almost Optimal Model-Free Reinforcement Learning via Reference-Advantage Decomposition (Zihan Zhang · Yuan Zhou · Xiangyang Ji)
- 2020 Poster: Learning Guidance Rewards with Trajectory-space Smoothing (Tanmay Gangwani · Yuan Zhou · Jian Peng)
- 2020 Poster: Off-Policy Interval Estimation with Lipschitz Value Iteration (Ziyang Tang · Yihao Feng · Na Zhang · Jian Peng · Qiang Liu)
- 2019 Poster: Thresholding Bandit with Optimal Aggregate Regret (Chao Tao · Saúl Blanco · Jian Peng · Yuan Zhou)
- 2019 Poster: Exploration via Hindsight Goal Generation (Zhizhou Ren · Kefan Dong · Yuan Zhou · Qiang Liu · Jian Peng)