Poster
Q-learning with Nearest Neighbors
Devavrat Shah · Qiaomin Xie
Room 517 AB #119
Keywords: [ Learning Theory ] [ Reinforcement Learning ]
Abstract:
We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path under an arbitrary policy of the system is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm to learn the optimal Q function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}(1/\varepsilon^d)$, so the sample complexity scales as $\tilde{O}(1/\varepsilon^{d+3})$. Indeed, we establish a lower bound arguing that a dependence of $\tilde{\Omega}(1/\varepsilon^{d+2})$ is necessary.
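To make the idea concrete, here is a minimal illustrative sketch of nearest-neighbor-style Q-learning on a continuous state space from a single sample path. It is not the authors' exact NNQL algorithm (which proceeds in epochs and averages over a covering of the state space); the function names, the anchor-set construction, and all parameter values below are assumptions made for illustration only.

```python
import numpy as np


def nearest(anchors, s):
    """Index of the anchor state closest to state s (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(anchors - s, axis=1)))


def nn_q_learning(sample_path, anchors, n_actions, gamma=0.9, alpha=0.1):
    """Nearest-neighbor-style Q-learning sketch (not the paper's exact NNQL).

    sample_path: iterable of (state, action, reward, next_state) transitions
                 from a single trajectory under an arbitrary policy.
    anchors:     (K, d) array of anchor states covering the state space,
                 e.g. an epsilon-net of [0, 1]^d (assumed construction).
    Returns a (K, n_actions) table approximating the optimal Q-function
    at the anchor states.
    """
    Q = np.zeros((len(anchors), n_actions))
    for s, a, r, s_next in sample_path:
        i = nearest(anchors, s)          # anchor representing the current state
        j = nearest(anchors, s_next)     # anchor representing the next state
        target = r + gamma * Q[j].max()  # one-step Bellman target
        Q[i, a] += alpha * (target - Q[i, a])
    return Q
```

In this sketch, every observed transition updates only the Q-value of its nearest anchor, so the finite table plays the role of the nearest neighbor regressor over the continuous state space.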