Poster
Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model
Aaron Sidford · Mengdi Wang · Xian Wu · Lin Yang · Yinyu Ye
Room 517 AB #168
Keywords: [ Decision and Control ] [ Reinforcement Learning ] [ Markov Decision Processes ] [ Exploration ] [ Planning ]
Abstract:
In this paper we consider the problem of computing an $\epsilon$-optimal policy of a discounted Markov Decision Process (DMDP) provided we can only access its transition function through a generative sampling model that, given any state-action pair, samples from the transition function in $O(1)$ time. Given such a DMDP with states $S$, actions $A$, discount factor $\gamma \in (0,1)$, and rewards in range $[0, 1]$, we provide an algorithm which computes an $\epsilon$-optimal policy with probability $1-\delta$, where {\it both} the run time spent and the number of samples taken are upper bounded by
$$O\!\left[\frac{|S||A|}{(1-\gamma)^{3}\epsilon^{2}} \log\!\left(\frac{|S||A|}{(1-\gamma)\delta\epsilon}\right) \log\!\left(\frac{1}{(1-\gamma)\epsilon}\right)\right].$$
For fixed values of $\epsilon$, this improves upon the previous best known bounds by a factor of $1/(1-\gamma)$ and matches the sample complexity lower bounds proved in \cite{azar2013minimax} up to logarithmic factors.
We also extend our method to computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative model and provide a nearly matching sample complexity lower bound.
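To make the access model concrete, below is a minimal sketch of the generative sampling oracle described in the abstract and of a Monte Carlo Bellman backup built from it. The `make_sampler` and `sampled_bellman_backup` functions, the tabular arrays `P`, `R`, `v`, and the per-pair sample count `m` are illustrative assumptions for this sketch; this is only the sampling primitive, not the paper's variance-reduced algorithm or its sample-allocation scheme.

```python
import numpy as np

def make_sampler(P):
    """Generative model: given a state-action pair (s, a), draw the next
    state from the transition distribution P[s, a] in O(1) time per call.
    P is a hypothetical |S| x |A| x |S| array of transition probabilities."""
    num_states = P.shape[2]
    def sample(s, a, rng):
        return rng.choice(num_states, p=P[s, a])
    return sample

def sampled_bellman_backup(sample, R, v, gamma, m, rng):
    """Monte Carlo estimate of the Bellman backup
        (T v)(s, a) = R[s, a] + gamma * E_{s' ~ P(.|s,a)} v(s'),
    using m i.i.d. calls to the generative model per state-action pair.
    Illustrative sketch only; the paper's algorithm reuses and reweights
    samples far more carefully to reach the stated complexity."""
    S, A = R.shape
    q = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            next_states = [sample(s, a, rng) for _ in range(m)]
            q[s, a] = R[s, a] + gamma * np.mean(v[next_states])
    return q

# Tiny usage example on a random 3-state, 2-action DMDP with rewards in [0, 1].
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))   # transition probabilities P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(3, 2))       # rewards R[s, a] in [0, 1]
sample = make_sampler(P)
v = np.zeros(3)                              # current value estimate
q = sampled_bellman_backup(sample, R, v, gamma=0.9, m=100, rng=rng)
greedy_policy = q.argmax(axis=1)             # greedy policy w.r.t. the estimate
```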