

Poster

Explicit Explore-Exploit Algorithms in Continuous State Spaces

Mikael Henaff

East Exhibition Hall B + C #187

Keywords: [ Deep Learning ] [ Reinforcement Learning and Planning -> Exploration ] [ Model-Based RL ] [ Reinforcement Learning and Planning ]


Abstract:

We present a new model-based algorithm for reinforcement learning (RL) which consists of explicit exploration and exploitation phases, and is applicable in large or infinite state spaces. The algorithm maintains a set of dynamics models consistent with current experience and explores by finding policies which induce high disagreement between their state predictions. It then exploits using the refined set of models or experience gathered during exploration. We show that under realizability and optimal planning assumptions, our algorithm provably finds a near-optimal policy with a number of samples that is polynomial in a structural complexity measure which we show to be low in several natural settings. We then give a practical approximation using neural networks and demonstrate its performance and sample efficiency in practice.
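To illustrate the core exploration idea described above, the following is a minimal sketch (not the authors' implementation) of disagreement-based intrinsic rewards over an ensemble of learned dynamics models, written in PyTorch. The class and function names (`DynamicsModel`, `disagreement_reward`) and the network sizes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class DynamicsModel(nn.Module):
    """One ensemble member: predicts the next state from (state, action)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def disagreement_reward(models, state, action):
    """Intrinsic reward: variance of the ensemble's next-state predictions.

    High variance means the models still disagree about the dynamics at
    (state, action), so an exploration policy is rewarded for visiting it.
    """
    preds = torch.stack([m(state, action) for m in models], dim=0)  # (K, B, S)
    return preds.var(dim=0).mean(dim=-1)  # (B,)


if __name__ == "__main__":
    state_dim, action_dim, ensemble_size = 4, 2, 5
    models = [DynamicsModel(state_dim, action_dim) for _ in range(ensemble_size)]

    state = torch.randn(8, state_dim)
    action = torch.randn(8, action_dim)
    print(disagreement_reward(models, state, action))
```

In the exploration phase, a policy trained to maximize such a disagreement signal is driven toward regions where the models consistent with current experience make conflicting predictions; once disagreement is reduced, the refined models (or the data gathered) can be used for exploitation.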
