Task-Induced Representation Learning
Jun Yamada · Karl Pertsch · Anisha Gunjal · Joseph Lim
Event URL: https://openreview.net/forum?id=kR0Tv4iEPu

A major bottleneck for applying deep reinforcement learning to real-world problems is its sample inefficiency, particularly when training policies from high-dimensional inputs such as images. A number of recent works use unsupervised representation learning to improve sample efficiency. Yet such unsupervised approaches are fundamentally unable to distinguish task-relevant from task-irrelevant information, so in visually complex scenes they learn representations that model many task-irrelevant details and therefore slow downstream task learning. Our insight: to determine which parts of the scene are important and should be modeled, we can exploit task information, such as rewards or demonstrations, from previous tasks. To this end, we formalize the problem of task-induced representation learning (TARP), which aims to leverage such task information in offline experience from prior tasks to learn compact representations that model only task-relevant aspects of the scene. Through a series of experiments in visually complex environments, we compare different approaches for leveraging task information within the TARP framework to prior unsupervised representation learning techniques; we (1) find that task-induced representations enable more sample-efficient learning of unseen tasks and (2) formulate a set of best practices for task-induced representation learning.
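The core idea — using reward signals from prior tasks to shape which parts of an observation the representation keeps — can be illustrated with a minimal sketch. This is not the authors' code; it is a toy, hypothetical instantiation in which a linear encoder and a reward-prediction head are trained jointly on offline (observation, reward) pairs from a prior task, so that the learned latent retains only reward-predictive information. All names and dimensions below are illustrative assumptions.

```python
# Toy sketch of task-induced representation learning (TARP-style idea):
# jointly train an encoder and a reward head on prior-task offline data,
# so the latent keeps only task-relevant information. NOT the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline dataset from a prior task: 32-dim observations
# whose reward depends only on the first 2 dimensions (the task-relevant part).
obs = rng.normal(size=(512, 32))
w_true = np.zeros(32)
w_true[:2] = [1.0, -0.5]
rewards = obs @ w_true  # reward is a function of task-relevant dims only

# Linear encoder (32 -> 4 latent) plus a linear reward head, trained jointly
# with plain gradient descent on mean-squared reward-prediction error.
E = rng.normal(scale=0.1, size=(32, 4))  # encoder weights
h = rng.normal(scale=0.1, size=(4,))     # reward-prediction head

lr = 0.05
for step in range(300):
    z = obs @ E                # latent representation of each observation
    err = z @ h - rewards      # reward-prediction error
    # Gradients of the (unscaled) squared-error loss w.r.t. head and encoder.
    grad_h = z.T @ err / len(obs)
    grad_E = obs.T @ np.outer(err, h) / len(obs)
    h -= lr * grad_h
    E -= lr * grad_E

# After training, the frozen encoder E would be reused when learning a new
# downstream task; final_loss measures how reward-predictive the latent is.
final_loss = float(np.mean((obs @ E @ h - rewards) ** 2))
```

In this toy setting the encoder has no incentive to represent the 30 reward-irrelevant input dimensions, which is the intuition behind preferring task-induced over purely unsupervised (e.g. reconstruction-based) objectives: a reconstruction loss would spend latent capacity on every input dimension regardless of relevance.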

Author Information

Jun Yamada (University of Oxford)
Karl Pertsch (University of Southern California)
Anisha Gunjal (University of Southern California)
Joseph Lim (MIT)
