Hindsight Foresight Relabeling for Meta-Reinforcement Learning
Meta-reinforcement learning (meta-RL) algorithms allow agents to learn new behaviors from small amounts of experience, mitigating the sample inefficiency of RL. However, while meta-RL agents can adapt quickly to new tasks at test time after experiencing only a few trajectories, the meta-training process itself remains sample-inefficient. Prior work has found that in the multi-task RL setting, relabeling past transitions and thereby sharing experience among tasks can improve sample efficiency and asymptotic performance. We apply this idea to the meta-RL setting and devise a new relabeling method called Hindsight Foresight Relabeling (HFR). We construct a relabeling distribution by combining "hindsight", which relabels trajectories using reward functions from the training task distribution, with "foresight", which takes the relabeled trajectories and computes the utility of each trajectory for each task. HFR is easy to implement and readily compatible with existing meta-RL algorithms. We find that HFR improves performance compared to other relabeling methods on a variety of meta-RL tasks.
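To make the hindsight/foresight decomposition concrete, below is a minimal Python sketch. The helper name `hindsight_foresight_relabel`, the use of the relabeled return as the per-task utility, and the softmax `temperature` are illustrative assumptions rather than the paper's exact construction; the actual HFR utility and its integration with an off-policy meta-RL learner follow the paper.

```python
import numpy as np

def hindsight_foresight_relabel(trajectories, reward_fns, temperature=1.0, rng=None):
    """Illustrative HFR-style relabeling sketch (hypothetical helper, not the authors' code).

    trajectories: list of trajectories, each a list of (state, action, next_state) tuples
    reward_fns:   one reward function r(s, a, s') -> float per training task
    """
    rng = rng if rng is not None else np.random.default_rng()
    relabeled = []
    for traj in trajectories:
        # Hindsight: recompute the trajectory's return under every task's reward function.
        returns = np.array([sum(r(s, a, s2) for (s, a, s2) in traj) for r in reward_fns])
        # Foresight (assumed here as the relabeled return): score the trajectory's
        # utility for each task, then normalize into a relabeling distribution.
        logits = returns / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Sample the task whose reward function this trajectory will carry.
        task_id = rng.choice(len(reward_fns), p=probs)
        r_fn = reward_fns[task_id]
        relabeled.append((task_id, [(s, a, r_fn(s, a, s2), s2) for (s, a, s2) in traj]))
    return relabeled

# Toy usage: two 1-D "tasks" whose rewards pull toward goals at +1 and -1.
goals = [1.0, -1.0]
reward_fns = [lambda s, a, s2, g=g: -abs(s2 - g) for g in goals]
traj = [(0.0, 0.1, 0.1), (0.1, 0.2, 0.3)]
print(hindsight_foresight_relabel([traj], reward_fns))
```

In a full meta-RL pipeline, the relabeled transitions would be written back into a task-conditioned replay buffer so that an off-policy learner can share experience across tasks, which is the mechanism the abstract credits for the sample-efficiency gains.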
Author Information
Michael Wan (University of Illinois at Urbana-Champaign)
Jian Peng (University of Illinois at Urbana-Champaign)
Tanmay Gangwani (University of Illinois at Urbana-Champaign)
I am a Ph.D. student in Computer Science at the University of Illinois at Urbana-Champaign, advised by Jian Peng. I'm interested in machine learning, especially reinforcement learning. My research focuses on designing algorithms that efficiently leverage expert demonstrations for RL (imitation learning), address the exploration challenge in complex environments, and use generative modeling methods for model-based RL. For details, please visit https://tgangwani.github.io
More from the Same Authors
- 2021: Imitation Learning from Observations under Transition Model Disparity
  Tanmay Gangwani · Yuan Zhou · Jian Peng
- 2022 Poster: Efficient Meta Reinforcement Learning for Preference-based Fast Adaptation
  Zhizhou Ren · Anji Liu · Yitao Liang · Jian Peng · Jianzhu Ma
- 2022 Poster: Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures
  Shitong Luo · Yufeng Su · Xingang Peng · Sheng Wang · Jian Peng · Jianzhu Ma
- 2023 Poster: Equivariant Neural Operator Learning with Graphon Convolution
  Chaoran Cheng · Jian Peng
- 2023 Poster: LinkerNet: Fragment Poses and Linker Co-Design with 3D Equivariant Diffusion
  Jiaqi Guan · Xingang Peng · PeiQi Jiang · Yunan Luo · Jian Peng · Jianzhu Ma
- 2021 Poster: A 3D Generative Model for Structure-Based Drug Design
  Shitong Luo · Jiaqi Guan · Jianzhu Ma · Jian Peng
- 2020 Poster: Learning Guidance Rewards with Trajectory-space Smoothing
  Tanmay Gangwani · Yuan Zhou · Jian Peng
- 2020 Poster: Off-Policy Interval Estimation with Lipschitz Value Iteration
  Ziyang Tang · Yihao Feng · Na Zhang · Jian Peng · Qiang Liu
- 2019 Poster: Thresholding Bandit with Optimal Aggregate Regret
  Chao Tao · Saúl Blanco · Jian Peng · Yuan Zhou
- 2019 Poster: Exploration via Hindsight Goal Generation
  Zhizhou Ren · Kefan Dong · Yuan Zhou · Qiang Liu · Jian Peng