Poster

When to use parametric models in reinforcement learning?

Hado van Hasselt · Matteo Hessel · John Aslanides

East Exhibition Hall B, C #193

Keywords: [ Reinforcement Learning and Planning ] [ Model-Based RL ] [ Planning ] [ Reinforcement Learning ]


Abstract:

We examine the question of when and how parametric models are most useful in reinforcement learning. In particular, we look at commonalities and differences between parametric models and experience replay. Replay-based learning algorithms share important traits with model-based approaches, including the ability to plan: to use more computation, without additional data, to improve predictions and behaviour. We discuss when to expect benefits from either approach, and interpret prior work in this context. We hypothesise that, under suitable conditions, replay-based algorithms should be competitive with or better than model-based algorithms if the model is used only to generate fictional transitions from observed states for an update rule that is otherwise model-free. We validated this hypothesis on Atari 2600 video games: the replay-based algorithm attained state-of-the-art data efficiency, improving over prior results with parametric models. Additionally, we discuss different ways to use models. We show that, when models are used for credit assignment (e.g., to directly learn a value or policy), it can be better to plan backward than to plan forward, even though the latter seems more common. Finally, we argue and demonstrate that it can be beneficial to plan forward for immediate behaviour, rather than for credit assignment.
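The hypothesis above contrasts planning with experience replay against planning with a learned parametric model when both feed the same model-free update rule. The sketch below is a minimal tabular illustration of that contrast, assuming a Dyna-style setup with a deterministic last-observation model; it is not the authors' implementation, and all names (q_update, plan_with_replay, plan_with_model) are hypothetical.

```python
import random
import numpy as np

# Minimal sketch (not the paper's code): the same model-free Q-learning update
# is driven either by replayed real transitions or by "fictional" transitions
# generated from a learned model at previously observed states (Dyna-style).
# The tabular setting and all names here are illustrative assumptions.

N_STATES, N_ACTIONS = 10, 2
GAMMA, ALPHA = 0.99, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))
replay_buffer = []   # observed (s, a, r, s_next) tuples
model = {}           # naive deterministic model: (s, a) -> (r, s_next)

def q_update(s, a, r, s_next):
    """Model-free update rule shared by both forms of planning."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])

def store(s, a, r, s_next):
    """Record a real transition, used both for replay and for model learning."""
    replay_buffer.append((s, a, r, s_next))
    model[(s, a)] = (r, s_next)

def plan_with_replay(n_updates):
    """Planning via replay: more computation without additional data."""
    for _ in range(n_updates):
        s, a, r, s_next = random.choice(replay_buffer)
        q_update(s, a, r, s_next)

def plan_with_model(n_updates):
    """Planning via a parametric model, queried only at observed states."""
    for _ in range(n_updates):
        s, a = random.choice(list(model.keys()))
        r, s_next = model[(s, a)]  # fictional transition from an observed state
        q_update(s, a, r, s_next)
```

In this regime, where the model is queried only at states that were actually observed and its outputs feed an otherwise model-free update, the two planning routines perform closely related updates; this is the sense in which the abstract expects replay to be competitive with parametric models, with the model's potential advantages lying elsewhere, such as planning forward for immediate behaviour.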
