Poster
Discovery of Options via Meta-Learned Subgoals
Vivek Veeriah · Tom Zahavy · Matteo Hessel · Zhongwen Xu · Junhyuk Oh · Iurii Kemaev · Hado van Hasselt · David Silver · Satinder Singh

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Temporal abstractions in the form of options have been shown to help reinforcement learning (RL) agents learn faster. However, despite prior work on this topic, the problem of discovering options through interaction with an environment remains a challenge. In this paper, we introduce a novel meta-gradient approach for discovering useful options in multi-task RL environments. Our approach is based on a manager-worker decomposition of the RL agent, in which a manager maximises rewards from the environment by learning a task-dependent policy over both a set of task-independent discovered options and primitive actions. The option-reward and termination functions that define a subgoal for each option are parameterised as neural networks and trained via meta-gradients to maximise their usefulness. Empirical analysis on gridworld and DeepMind Lab tasks shows that: (1) our approach can discover meaningful and diverse temporally-extended options in multi-task RL domains, (2) the discovered options are frequently used by the agent while learning to solve the training tasks, and (3) the discovered options help a randomly initialised manager learn faster in completely new tasks.
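The manager-worker decomposition described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): each option carries an option-reward function and a termination function, here reduced to single linear layers with random weights, while the manager selects among options and primitive actions. In the paper these components are neural networks trained via meta-gradients; all names, dimensions, and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
STATE_DIM, N_OPTIONS, N_PRIMITIVE = 4, 2, 3

# Each option's reward and termination functions, and the manager's policy,
# are sketched as single linear layers (neural networks in the paper).
option_reward_w = rng.normal(size=(N_OPTIONS, STATE_DIM))
option_term_w = rng.normal(size=(N_OPTIONS, STATE_DIM))
manager_w = rng.normal(size=(N_OPTIONS + N_PRIMITIVE, STATE_DIM))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def option_reward(o, s):
    # Intrinsic reward defining option o's subgoal; meta-learned in the paper.
    return float(option_reward_w[o] @ s)

def option_terminates(o, s):
    # Termination decision for option o in state s (thresholded probability).
    return bool(sigmoid(option_term_w[o] @ s) > 0.5)

def manager_choice(s):
    # Manager's task-dependent policy over options and primitive actions;
    # greedy selection here for simplicity.
    return int(np.argmax(manager_w @ s))

s = rng.normal(size=STATE_DIM)
choice = manager_choice(s)
if choice < N_OPTIONS:
    print("option", choice, "intrinsic reward", option_reward(choice, s))
else:
    print("primitive action", choice - N_OPTIONS)
```

The key idea the sketch conveys: the manager's action space is the union of discovered options and primitive actions, and the subgoal of each option is defined implicitly by its (meta-learned) reward and termination functions rather than hand-specified.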

Author Information

Vivek Veeriah (University of Michigan)
Tom Zahavy (DeepMind)
Matteo Hessel (Google DeepMind)
Zhongwen Xu (DeepMind)
Junhyuk Oh (DeepMind)
Iurii Kemaev (DeepMind)
Hado van Hasselt (DeepMind)
David Silver (DeepMind)
Satinder Singh (DeepMind)
