

Poster

Planning with Goal-Conditioned Policies

Soroush Nasiriany · Vitchyr Pong · Steven Lin · Sergey Levine

East Exhibition Hall B, C #218

Keywords: [ Reinforcement Learning and Planning ] [ Reinforcement Learning ] [ Planning ]


Abstract:

Planning methods can solve temporally extended sequential decision making problems by composing simple behaviors. However, planning requires suitable abstractions for the states and transitions, which typically need to be designed by hand. In contrast, reinforcement learning (RL) can acquire behaviors from low-level inputs directly, but struggles with temporally extended tasks. Can we utilize reinforcement learning to automatically form the abstractions needed for planning, thus obtaining the best of both approaches? We show that goal-conditioned policies learned with RL can be incorporated into planning, such that a planner can focus on which states to reach, rather than how those states are reached. However, with complex state observations such as images, not all inputs represent valid states. We therefore also propose using a latent variable model to compactly represent the set of valid states for the planner, such that the policies provide an abstraction of actions, and the latent variable model provides an abstraction of states. We compare our method with planning-based and model-free methods and find that our method significantly outperforms prior work when evaluated on image-based tasks that require non-greedy, multi-staged behavior.
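To make the idea concrete, the following is a minimal, hypothetical sketch of planning over learned latent subgoals, not the authors' released implementation. A cross-entropy-method (CEM) planner searches over sequences of latent subgoals, scoring each sequence with a goal-conditioned value estimate (the action abstraction) plus the latent model's prior density (the state abstraction, keeping subgoals on the manifold of valid states). All names here (encode, value, prior_log_density) and the toy stand-ins for the learned components are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # dimensionality of the latent state space (assumed)
NUM_SUBGOALS = 3  # number of intermediate subgoals to plan (assumed)

# --- Hypothetical stand-ins for learned components --------------------------
# In a real system these would be a trained latent-variable encoder and a
# goal-conditioned value function; toy functions are used here so the sketch
# runs end to end.

def encode(observation):
    """Stand-in encoder: map a raw observation to a latent state."""
    return np.tanh(observation[:LATENT_DIM])

def value(z_from, z_to):
    """Stand-in goal-conditioned value: higher when z_to is easily reachable
    from z_from. A trained V(s, g) would be used in practice."""
    return -np.linalg.norm(z_to - z_from)

def prior_log_density(z):
    """Log-density under a standard normal latent prior; penalizes subgoals
    that do not correspond to valid states."""
    return -0.5 * np.sum(z ** 2)

# --- CEM planner over sequences of latent subgoals ---------------------------

def score(plan, z_start, z_goal, prior_weight=0.1):
    """Sum of value estimates along the subgoal chain plus a validity bonus."""
    chain = [z_start] + list(plan) + [z_goal]
    feasibility = sum(value(a, b) for a, b in zip(chain, chain[1:]))
    validity = sum(prior_log_density(z) for z in plan)
    return feasibility + prior_weight * validity

def plan_subgoals(z_start, z_goal, iters=20, pop=256, elite_frac=0.1):
    """Cross-entropy method: sample candidate subgoal sequences, keep the
    highest-scoring elites, and refit a Gaussian around them."""
    mean = np.zeros((NUM_SUBGOALS, LATENT_DIM))
    std = np.ones_like(mean)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((pop, *mean.shape))
        scores = np.array([score(s, z_start, z_goal) for s in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # sequence of latent subgoals to hand to the policy

if __name__ == "__main__":
    z0 = encode(rng.standard_normal(16))
    zg = encode(rng.standard_normal(16))
    subgoals = plan_subgoals(z0, zg)
    # Each subgoal would be passed in turn to the goal-conditioned policy
    # pi(a | s, g), composing short-horizon behaviors into a long-horizon plan.
    print("planned subgoals:\n", np.round(subgoals, 2))
```

The division of labor mirrors the abstract: the planner only decides which latent states to reach, while the goal-conditioned policy handles how each one is reached.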
