Poster
Sample-Efficient Reinforcement Learning for Linearly-Parameterized MDPs with a Generative Model
Bingyan Wang · Yuling Yan · Jianqing Fan
Keywords: [ Generative Model ] [ Theory ] [ Reinforcement Learning and Planning ]
Abstract:
The curse of dimensionality is a widely known issue in reinforcement learning (RL). In the tabular setting where the state space $\mathcal{S}$ and the action space $\mathcal{A}$ are both finite, to obtain a near-optimal policy with sampling access to a generative model, the minimax optimal sample complexity scales linearly with $|\mathcal{S}||\mathcal{A}|$, which can be prohibitively large when $|\mathcal{S}|$ or $|\mathcal{A}|$ is large. This paper considers a Markov decision process (MDP) that admits a set of state-action features, which can linearly express (or approximate) its probability transition kernel. We show that a model-based approach (resp. Q-learning) provably learns an $\varepsilon$-optimal policy (resp. Q-function) with high probability as soon as the sample size exceeds the order of $\frac{K}{(1-\gamma)^{3}\varepsilon^{2}}$ (resp. $\frac{K}{(1-\gamma)^{4}\varepsilon^{2}}$), up to some logarithmic factor. Here $K$ is the feature dimension and $\gamma \in (0,1)$ is the discount factor of the MDP. Both sample complexity bounds are provably tight, and our result for the model-based approach matches the minimax lower bound. Our results show that for arbitrarily large-scale MDPs, both the model-based approach and Q-learning are sample-efficient when $K$ is relatively small, and hence the title of this paper.
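For concreteness, the linear parameterization and the two sample-size conditions described in the abstract can be restated compactly; the feature map $\phi$ and weight functions $\psi_k$ below are standard notation for this model class and may differ from the symbols used in the paper itself:

$$
P(s' \mid s, a) \;=\; \sum_{k=1}^{K} \phi_k(s,a)\,\psi_k(s'), \qquad \phi(s,a) \in \mathbb{R}^{K},
$$

$$
N \;\gtrsim\; \frac{K}{(1-\gamma)^{3}\varepsilon^{2}} \;\;\text{(model-based: $\varepsilon$-optimal policy)}, \qquad
N \;\gtrsim\; \frac{K}{(1-\gamma)^{4}\varepsilon^{2}} \;\;\text{(Q-learning: $\varepsilon$-accurate Q-function)},
$$

up to logarithmic factors, where $N$ denotes the total number of samples drawn from the generative model.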