
Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model
Gen Li · Yuting Wei · Yuejie Chi · Yuantao Gu · Yuxin Chen

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #82
We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space $S$ and action space $A$, assuming access to a generative model. Despite a number of prior works tackling this problem, a complete picture of the trade-offs between sample complexity and statistical accuracy is yet to be determined. In particular, prior results suffer from a sample size barrier, in the sense that their claimed statistical guarantees hold only when the sample size exceeds at least $ |S| |A| / (1-\gamma)^2 $ (up to some log factor). The current paper overcomes this barrier by certifying the minimax optimality of model-based reinforcement learning as soon as the sample size exceeds the order of $ |S| |A| / (1-\gamma) $ (modulo some log factor). More specifically, a perturbed model-based planning algorithm provably finds an $\epsilon$-optimal policy with an order of $ |S| |A| / ((1-\gamma)^3\epsilon^2 ) $ samples (up to log factor) for any $0< \epsilon < 1/(1-\gamma)$. Along the way, we derive improved (instance-dependent) guarantees for model-based policy evaluation. To the best of our knowledge, this work provides the first minimax-optimal guarantee under a generative model that accommodates the entire range of sample sizes (beyond which finding a meaningful policy is information-theoretically impossible).
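To make the setting concrete, the sketch below illustrates generic model-based planning with a generative model: draw a fixed number of next-state samples per state-action pair, build an empirical MDP, slightly perturb the rewards (mirroring, in spirit, the perturbation idea mentioned in the abstract), and run value iteration on the empirical model. All function names and parameters here are illustrative assumptions; this is not the paper's exact algorithm or analysis.

```python
import numpy as np

def model_based_plan(sample_next_state, reward, n_states, n_actions,
                     gamma=0.9, n_samples=100, perturb_scale=1e-6,
                     n_iters=500, seed=0):
    """Illustrative sketch of model-based planning with a generative model.

    sample_next_state(s, a) -> s' is the generative model: it returns one
    sampled next state for the queried state-action pair.
    """
    rng = np.random.default_rng(seed)
    # Build the empirical transition kernel P_hat[s, a, s'] from
    # n_samples generative-model queries per (s, a).
    P_hat = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            for _ in range(n_samples):
                P_hat[s, a, sample_next_state(s, a)] += 1
    P_hat /= n_samples
    # Tiny random reward perturbation (hedged stand-in for the paper's
    # perturbation step, which helps separate near-optimal policies).
    r = reward + perturb_scale * rng.random((n_states, n_actions))
    # Plain value iteration on the empirical MDP.
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = r + gamma * P_hat @ V   # (S, A, S) @ (S,) -> (S, A)
        V = Q.max(axis=1)
    return Q.argmax(axis=1), V      # greedy policy and value estimate
```

In this simplified form, the empirical model costs $|S||A| \cdot n_{\text{samples}}$ generative-model queries; the paper's contribution is showing how few such samples suffice for the resulting policy to be $\epsilon$-optimal.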

Author Information

Gen Li (Tsinghua University)
Yuting Wei (Carnegie Mellon University)
Yuejie Chi (Carnegie Mellon University)
Yuantao Gu (Tsinghua University)
Yuxin Chen (Princeton University)
