Poster
Near-Optimal Time and Sample Complexities for Solving Markov Decision Processes with a Generative Model
Aaron Sidford · Mengdi Wang · Xian Wu · Lin Yang · Yinyu Ye
Room 517 AB #168
Keywords: [ Decision and Control ] [ Reinforcement Learning ] [ Markov Decision Processes ] [ Exploration ] [ Planning ]
Abstract:
In this paper we consider the problem of computing an ϵ-optimal policy of a discounted Markov Decision Process (DMDP), provided we can only access its transition function through a generative sampling model that, given any state-action pair, samples from the transition function in O(1) time. Given such a DMDP with states S, actions A, discount factor γ∈(0,1), and rewards in range [0,1], we provide an algorithm which computes an ϵ-optimal policy with probability 1−δ, where {\it both} the run time spent and the number of samples taken are upper bounded by
O[ |S||A| / ((1−γ)³ϵ²) · log(|S||A| / ((1−γ)δϵ)) · log(1 / ((1−γ)ϵ)) ].
For fixed values of ϵ, this improves upon the previous best known bounds by a factor of 1/(1−γ) and matches the sample complexity lower bounds proved in Azar et al. (2013) up to logarithmic factors.
We also extend our method to computing ϵ-optimal policies for finite-horizon MDPs with a generative model and provide a nearly matching sample complexity lower bound.
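To make the stated bound concrete, the following is a minimal sketch that simply evaluates the sample/runtime upper bound from the abstract for a chosen problem size. The function name, the example parameter values, and the decision to ignore the constants hidden by the O(·) notation are all illustrative assumptions, not part of the paper.

```python
import math


def sample_complexity_bound(S, A, gamma, eps, delta):
    """Evaluate the upper bound from the abstract (constants in the O(.) ignored):
    |S||A| / ((1-gamma)^3 eps^2) * log(|S||A| / ((1-gamma) delta eps)) * log(1 / ((1-gamma) eps))
    """
    leading = S * A / ((1 - gamma) ** 3 * eps ** 2)
    log_confidence = math.log(S * A / ((1 - gamma) * delta * eps))
    log_accuracy = math.log(1 / ((1 - gamma) * eps))
    return leading * log_confidence * log_accuracy


# Hypothetical example: 1000 states, 10 actions, gamma = 0.99, eps = 0.1, delta = 0.05
print(f"{sample_complexity_bound(1000, 10, 0.99, 0.1, 0.05):.3e}")
```

Note how the 1/(1−γ)³ dependence dominates for discount factors close to 1, which is where the factor-of-1/(1−γ) improvement over prior bounds matters most.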