Planning with Goal-Conditioned Policies
Soroush Nasiriany · Vitchyr Pong · Steven Lin · Sergey Levine

Thu Dec 10:45 AM -- 12:45 PM PST @ East Exhibition Hall B + C #218

Planning methods can solve temporally extended sequential decision-making problems by composing simple behaviors. However, planning requires suitable abstractions for the states and transitions, which typically need to be designed by hand. In contrast, reinforcement learning (RL) can acquire behaviors from low-level inputs directly, but struggles with temporally extended tasks. Can we utilize reinforcement learning to automatically form the abstractions needed for planning, thus obtaining the best of both approaches? We show that goal-conditioned policies learned with RL can be incorporated into planning, such that a planner can focus on which states to reach, rather than how those states are reached. However, with complex state observations such as images, not all inputs represent valid states. We therefore also propose using a latent variable model to compactly represent the set of valid states for the planner, such that the policies provide an abstraction of actions, and the latent variable model provides an abstraction of states. We compare our method with planning-based and model-free methods and find that our method significantly outperforms prior work when evaluated on image-based tasks that require non-greedy, multi-stage behavior.
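The idea of planning over latent subgoals can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a hypothetical latent variable model with a standard-normal prior, and scores candidate subgoal sequences by their log-prior so that the planner favors subgoals that decode to valid states. The names `feasibility` and `plan_subgoals` are illustrative, and the decoder and goal-conditioned policy are only referenced in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def feasibility(z_seq):
    # Under a standard-normal prior, a subgoal sequence's log-density
    # (up to a constant) is -0.5 * ||z||^2; higher means the latents
    # are more likely to correspond to valid states.
    return -0.5 * np.sum(z_seq ** 2)

def plan_subgoals(z_goal, n_subgoals=3, n_candidates=256, latent_dim=4):
    # Random-shooting planner: sample candidate subgoal sequences in
    # latent space and keep the one the prior deems most feasible,
    # then append the final goal latent.
    best_seq, best_score = None, -np.inf
    for _ in range(n_candidates):
        z_seq = rng.normal(size=(n_subgoals, latent_dim))
        score = feasibility(z_seq)
        if score > best_score:
            best_seq, best_score = z_seq, score
    return np.vstack([best_seq, z_goal[None]])

z_goal = np.zeros(4)
plan = plan_subgoals(z_goal)  # (n_subgoals + 1, latent_dim) latents
# A goal-conditioned policy would then pursue each subgoal in turn,
# e.g. for z in plan: rollout(policy, decoder(z))
```

In the full method the planner would optimize rather than merely sample subgoal sequences, and the policy's learned value function would also score reachability between consecutive subgoals; the sketch keeps only the state-abstraction idea.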

Author Information

Soroush Nasiriany (UC Berkeley)
Vitchyr Pong (UC Berkeley)
Steven Lin (UC Berkeley)
Sergey Levine (UC Berkeley)
