Poster
Near-optimal Reinforcement Learning in Factored MDPs
Ian Osband · Benjamin Van Roy
Level 2, room 210D
Abstract:
Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer $\Omega(\sqrt{SAT})$ regret on some MDP, where $T$ is the elapsed time and $S$ and $A$ are the cardinalities of the state and action spaces. This implies $T = \Omega(SA)$ time to guarantee a near-optimal policy. In many settings of practical interest, due to the curse of dimensionality, $S$ and $A$ can be so enormous that this learning time is unacceptable. We establish that, if the system is known to be a \emph{factored} MDP, it is possible to achieve regret that scales polynomially in the number of \emph{parameters} encoding the factored MDP, which may be exponentially smaller than $S$ or $A$. We provide two algorithms that satisfy near-optimal regret bounds in this context: posterior sampling reinforcement learning (PSRL) and an upper confidence bound algorithm (UCRL-Factored).
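
For readers unfamiliar with posterior sampling, the following is a minimal sketch of the idea behind PSRL, specialized to an ordinary tabular MDP rather than the factored setting of the paper (where the single Dirichlet posterior below would be replaced by one posterior per factor). The environment interface (reset()/step(a)) and all function names are illustrative assumptions, not the authors' implementation.

import numpy as np

def sample_mdp(counts, rng):
    """Draw one plausible transition model from a Dirichlet posterior per (s, a)."""
    S, A, _ = counts.shape
    P = np.empty((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(counts[s, a] + 1.0)  # +1.0: uniform prior
    return P

def solve_mdp(P, R, horizon):
    """Finite-horizon value iteration on the sampled MDP; returns a policy."""
    S, A, _ = P.shape
    V = np.zeros(S)
    policy = np.zeros((horizon, S), dtype=int)
    for h in reversed(range(horizon)):
        Q = R + P @ V              # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] * V[s']
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def psrl(env, S, A, horizon, episodes, seed=0):
    """PSRL loop: sample an MDP, act greedily in it for one episode, update counts."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((S, A, S))   # transition counts (Dirichlet posterior)
    R = np.zeros((S, A))           # running mean reward per (s, a)
    n = np.zeros((S, A))
    for _ in range(episodes):
        policy = solve_mdp(sample_mdp(counts, rng), R, horizon)
        s = env.reset()            # hypothetical env API
        for h in range(horizon):
            a = policy[h, s]
            s2, r = env.step(a)
            counts[s, a, s2] += 1
            n[s, a] += 1
            R[s, a] += (r - R[s, a]) / n[s, a]  # incremental mean reward
            s = s2
    return counts, R

The appeal of posterior sampling is that exploration needs no explicit bonus: acting optimally with respect to a single posterior sample explores in proportion to the remaining uncertainty, and this probability-matching behavior is the intuition behind the regret analysis.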