Poster

Interesting Object, Curious Agent: Learning Task-Agnostic Exploration

Simone Parisi · Victoria Dean · Deepak Pathak · Abhinav Gupta

Keywords: [ Reinforcement Learning and Planning ] [ Continual Learning ]

Thu 9 Dec 8:30 a.m. PST — 10 a.m. PST
 
Oral presentation: Oral Session 5: Reinforcement Learning and Planning
Fri 10 Dec 4 p.m. PST — 5 p.m. PST

Abstract:

Common approaches for task-agnostic exploration learn tabula rasa: the agent assumes isolated environments and no prior knowledge or experience. However, in the real world, agents learn in many environments and always bring prior experience as they explore new ones. Exploration is a lifelong process. In this paper, we propose a paradigm change in the formulation and evaluation of task-agnostic exploration. In this setup, the agent first learns to explore across many environments without any extrinsic goal in a task-agnostic manner. Later on, the agent effectively transfers the learned exploration policy to better explore new environments when solving tasks. In this context, we evaluate several baseline exploration strategies and present a simple yet effective approach to learning task-agnostic exploration policies. Our key idea is that there are two components of exploration: (1) an agent-centric component encouraging exploration of unseen parts of the environment based on an agent's belief; (2) an environment-centric component encouraging exploration of inherently interesting objects. We show that our formulation is effective and provides the most consistent exploration across several training-testing environment pairs. We also introduce benchmarks and metrics for evaluating task-agnostic exploration strategies. The source code is available at https://github.com/sparisi/cbet/.
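To make the two-component idea concrete, here is a minimal, hypothetical sketch of an intrinsic reward that combines an agent-centric novelty bonus (rarely visited states) with an environment-centric bonus (rarely caused environment changes, e.g. object interactions). The count tables, the `change` signal, and the specific combination rule are illustrative assumptions for exposition, not the paper's exact formulation; see the linked repository for the actual method.

```python
from collections import defaultdict

# Illustrative counts (assumptions, not the paper's exact formulation):
# agent-centric visit counts over states, and environment-centric counts
# over observed environment changes (e.g. an object being moved or opened).
agent_counts = defaultdict(int)
change_counts = defaultdict(int)

def intrinsic_reward(state, change):
    """Return a bonus that is high when the state is rarely visited
    AND the induced environment change is rarely seen."""
    agent_counts[state] += 1
    change_counts[change] += 1
    # Rare states and rare changes both keep the denominator small,
    # so the reward decays as either becomes familiar.
    return 1.0 / (agent_counts[state] + change_counts[change])
```

Such a reward would be added to (or, in the task-agnostic pretraining phase, replace) the extrinsic task reward when training the exploration policy.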
