

Poster

Continual Learning of Control Primitives: Skill Discovery via Reset-Games

Kelvin Xu · Siddharth Verma · Chelsea Finn · Sergey Levine

Poster Session 5 #607

Abstract:

Reinforcement learning has the potential to automate the acquisition of behavior in complex settings, but in order for it to be successfully deployed, a number of practical challenges must be addressed. First, in real-world settings, when an agent attempts a task and fails, the environment must somehow "reset" so that the agent can attempt the task again. While this is easy in simulation, it can require considerable human effort in the real world, especially if the number of trials is very large. Second, real-world learning is often limited by challenges in exploration, as complex, temporally extended behavior is often difficult to acquire with random exploration. In this work, we show how a single method can allow an agent to acquire skills with minimal supervision while removing the need for resets. We do this by exploiting the insight that the need to "reset" an agent to a broad set of initial states for a learning task provides a natural setting to learn a diverse set of "reset-skills." We propose a general-sum game formulation that naturally balances the objectives of resetting and learning skills, demonstrate that this approach improves performance on reset-free tasks, and additionally show that the skills we obtain can be used to significantly accelerate downstream learning.
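To make the training scheme described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: a forward task policy and a set of "reset skills" alternate control of a single, never-reset environment, and the reset skills are rewarded for reaching distinct states so that the forward policy trains from a broad set of initial states. The environment, the tabular policies, and the diversity bonus (a stand-in for a learned skill discriminator) are all illustrative assumptions, as is the absence of the paper's actual general-sum game objective.

```python
# Hypothetical sketch of reset-free training with diversity-driven reset skills.
# All components below are toy placeholders, not the method from the paper.
import random

NUM_SKILLS = 4
EPISODE_LEN = 20

class ToyChainEnv:
    """1-D chain; the 'task' is to reach position +10. Never resets itself."""
    def __init__(self):
        self.pos = 0
    def step(self, action):  # action in {-1, +1}
        self.pos += action
        task_reward = 1.0 if self.pos >= 10 else 0.0
        return self.pos, task_reward

class TabularPolicy:
    """Epsilon-greedy policy over a table of state-action values."""
    def __init__(self):
        self.q = {}
    def act(self, s):
        if random.random() < 0.1:
            return random.choice([-1, 1])
        return max([-1, 1], key=lambda a: self.q.get((s, a), 0.0))
    def update(self, s, a, r, lr=0.1):
        key = (s, a)
        self.q[key] = self.q.get(key, 0.0) + lr * (r - self.q.get(key, 0.0))

def diversity_bonus(state, skill_id):
    # Stand-in for a learned discriminator term log q(z|s): each skill z is
    # rewarded for occupying "its own" region of the state space.
    targets = {0: -6, 1: -2, 2: 2, 3: 6}
    return -abs(state - targets[skill_id])

env = ToyChainEnv()
forward = TabularPolicy()
reset_skills = [TabularPolicy() for _ in range(NUM_SKILLS)]

for phase in range(200):
    # Forward phase: practice the task from wherever the agent currently is.
    for _ in range(EPISODE_LEN):
        s = env.pos
        a = forward.act(s)
        _, r = env.step(a)
        forward.update(s, a, r)
    # Reset phase: a randomly chosen skill "resets" the agent toward a
    # diverse start state instead of relying on a manual environment reset.
    z = random.randrange(NUM_SKILLS)
    for _ in range(EPISODE_LEN):
        s = env.pos
        a = reset_skills[z].act(s)
        s2, _ = env.step(a)
        reset_skills[z].update(s, a, diversity_bonus(s2, z))

print("final position:", env.pos)
```

The key design point the sketch illustrates is that the reset skills serve double duty: they remove the need for manual resets during training, and because they are optimized for diversity, they also form a library of behaviors that can be reused to accelerate downstream learning.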
