

Poster

Online Linear Regression and Its Application to Model-Based Reinforcement Learning

Alexander L Strehl · Michael L Littman


Abstract:

We present a provably efficient algorithm for learning Markov Decision Processes (MDPs) with continuous state and action spaces in the online setting. Specifically, we take a model-based approach and show that a special type of online linear regression allows us to learn MDPs with (possibly kernelized) linearly parameterized dynamics. This result builds on the work of Kearns and Singh, who provide a provably efficient algorithm for finite-state MDPs. Our approach is not restricted to the linear setting and is applicable to other classes of continuous MDPs.
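The paper's own algorithm and its analysis are not reproduced here. As a rough illustration of the general idea, the sketch below uses plain online ridge regression to fit linearly parameterized dynamics s' ≈ W·phi(s, a) from a stream of observed transitions. The class name, feature map, dimensions, and regularizer are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# A minimal sketch (not the authors' exact method): online ridge regression
# for estimating linearly parameterized dynamics s' ~ W @ phi(s, a).
# All names and constants below are illustrative assumptions.

class OnlineLinearModel:
    def __init__(self, feat_dim, state_dim, reg=1.0):
        # A accumulates phi phi^T (plus ridge term); B accumulates phi s'^T.
        self.A = reg * np.eye(feat_dim)
        self.B = np.zeros((feat_dim, state_dim))

    def update(self, phi, next_state):
        """Incorporate one observed transition (phi(s, a), s')."""
        phi = phi.reshape(-1, 1)
        self.A += phi @ phi.T
        self.B += phi @ next_state.reshape(1, -1)

    def predict(self, phi):
        """Predict the next state for features phi(s, a)."""
        W = np.linalg.solve(self.A, self.B)  # regularized least-squares solution
        return phi @ W


# Example usage with a 2-D state, 1-D action, and feature map phi(s, a) = [s; a].
rng = np.random.default_rng(0)
model = OnlineLinearModel(feat_dim=3, state_dim=2)
for _ in range(100):
    s, a = rng.normal(size=2), rng.normal(size=1)
    s_next = 0.9 * s + 0.1 * a          # unknown "true" linear dynamics
    model.update(np.concatenate([s, a]), s_next)
print(model.predict(np.array([1.0, 0.0, 0.5])))
```

A model-based RL agent would combine such predictions with a measure of uncertainty (how well the features of a candidate state-action pair are covered by past data) to decide when to explore; that exploration machinery is the substance of the paper and is omitted from this sketch.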
