A major bottleneck for applying deep reinforcement learning to real-world problems is its sample inefficiency, particularly when training policies from high-dimensional inputs such as images. A number of recent works use unsupervised representation learning approaches to improve sample efficiency. Yet such unsupervised approaches are fundamentally unable to distinguish between task-relevant and task-irrelevant information; in visually complex scenes they therefore learn representations that model many task-irrelevant details, which slows downstream task learning. Our insight: to determine which parts of the scene are important and should be modeled, we can exploit task information, such as rewards or demonstrations, from previous tasks. To this end, we formalize the problem of task-induced representation learning (TARP), which aims to leverage such task information in offline experience from prior tasks to learn compact representations that model only task-relevant aspects of the scene. Through a series of experiments in visually complex environments, we compare different approaches for leveraging task information within the TARP framework to prior unsupervised representation learning techniques, and we (1) find that task-induced representations enable more sample-efficient learning of unseen tasks and (2) formulate a set of best practices for task-induced representation learning.
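To make the idea concrete, here is a minimal sketch of one way task information from prior tasks can induce a representation: an image encoder is pretrained on offline data from previous tasks with a reward-prediction objective, so gradients from the task reward shape the encoder to keep only reward-predictive scene features. This is an illustrative PyTorch sketch under our own assumptions, not the authors' implementation; all names (ConvEncoder, RewardHead, pretrain_step) and hyperparameters are hypothetical.

```python
# Hypothetical sketch of task-induced representation pretraining:
# an encoder is trained on offline prior-task data to predict that
# task's reward, so only reward-predictive scene features survive.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvEncoder(nn.Module):
    """Maps image observations (B, 3, 64, 64) to a compact representation."""

    def __init__(self, repr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, repr_dim),  # 2304 features for 64x64 inputs
        )

    def forward(self, obs):
        return self.net(obs)


class RewardHead(nn.Module):
    """Predicts per-task reward from the representation.

    One output per prior task lets a single shared encoder receive
    supervision from all tasks in the offline dataset."""

    def __init__(self, repr_dim=64, num_tasks=4):
        super().__init__()
        self.head = nn.Linear(repr_dim, num_tasks)

    def forward(self, z, task_id):
        # task_id: LongTensor (B,); select each sample's prediction
        # for the prior task it came from.
        return self.head(z).gather(1, task_id.unsqueeze(1)).squeeze(1)


def pretrain_step(encoder, head, optimizer, batch):
    """One gradient step on an offline batch of (obs, reward, task_id)."""
    z = encoder(batch["obs"])             # (B, repr_dim)
    pred = head(z, batch["task_id"])      # (B,)
    loss = F.mse_loss(pred, batch["reward"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


encoder, head = ConvEncoder(), RewardHead()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=3e-4
)
# After pretraining, freeze the encoder and feed its output (instead of
# raw pixels) to the downstream RL policy for the unseen task.
```

The abstract names rewards or demonstrations as possible sources of task information; a demonstration-based variant would swap the reward head for a behavioral-cloning action head trained on the prior tasks' demonstrations, with the same frozen-encoder transfer to the unseen task.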
Author Information
Jun Yamada (University of Oxford)
Karl Pertsch (University of Southern California)
Anisha Gunjal (University of Southern California)
Joseph Lim (MIT)
More from the Same Authors
- 2021 : Skill-based Meta-Reinforcement Learning » Taewook Nam · Shao-Hua Sun · Karl Pertsch · Sung Ju Hwang · Joseph Lim
- 2022 : SPRINT: Scalable Semantic Policy Pre-training via Language Instruction Relabeling » Jesse Zhang · Karl Pertsch · Jiahui Zhang · Taewook Nam · Sung Ju Hwang · Xiang Ren · Joseph Lim
- 2021 Poster: Learning to Synthesize Programs as Interpretable and Generalizable Policies » Dweep Trivedi · Jesse Zhang · Shao-Hua Sun · Joseph Lim
- 2021 Poster: Generalizable Imitation Learning from Observation via Inferring Goal Proximity » Youngwoon Lee · Andrew Szot · Shao-Hua Sun · Joseph Lim
- 2020 : Contributed Talk: Accelerating Reinforcement Learning with Learned Skill Priors » Karl Pertsch · Youngwoon Lee · Joseph Lim
- 2020 : Contributed Talk 1 - "Accelerating Reinforcement Learning with Learned Skill Priors" (Best Paper Runner-Up) » Karl Pertsch
- 2020 Poster: Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors » Karl Pertsch · Oleh Rybkin · Frederik Ebert · Shenghao Zhou · Dinesh Jayaraman · Chelsea Finn · Sergey Levine
- 2016 : Knowledge Acquisition for Visual Question Answering via Iterative Querying » Yuke Zhu · Joseph Lim · Li Fei-Fei
- 2016 Workshop: 3D Deep Learning » Fisher Yu · Joseph Lim · Matthew D Fisher · Qixing Huang · Jianxiong Xiao
- 2015 Poster: Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning » Jiajun Wu · Ilker Yildirim · Joseph Lim · Bill Freeman · Josh Tenenbaum
- 2011 Poster: Transfer Learning by Borrowing Examples » Joseph Lim · Russ Salakhutdinov · Antonio Torralba