

Poster

Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes

Asaf Cassel · Aviv Rosenberg

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Policy Optimization (PO) methods are among the most popular Reinforcement Learning (RL) algorithms in practice. Recently, Sherman et al. [2023a] proposed a PO-based algorithm with rate-optimal regret guarantees under the linear Markov Decision Process (MDP) model. However, their algorithm relies on a costly pure-exploration warm-up phase that is hard to implement in practice. This paper eliminates the undesired warm-up phase, replacing it with a simple and efficient contraction mechanism. Our PO algorithm achieves rate-optimal regret with improved dependence on the other problem parameters (horizon and function approximation dimension) in two fundamental settings: adversarial losses with full-information feedback and stochastic losses with bandit feedback.
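
As background (not stated in the abstract, and using one common convention whose notation may differ from the paper's), the linear MDP assumption and the regret objective referred to above can be summarized as follows: in an episodic MDP with horizon H, the transition kernel and per-step losses are linear in a known d-dimensional feature map,

% Linear MDP assumption (standard form; phi, mu_h, theta_h are the usual
% feature map and unknown parameters, not notation taken from this paper).
\[
  P_h(s' \mid s, a) = \langle \phi(s,a), \mu_h(s') \rangle,
  \qquad
  \ell_h(s, a) = \langle \phi(s,a), \theta_h \rangle,
\]
% Regret over K episodes: the learner's cumulative loss against the best
% fixed policy in hindsight (in the stochastic setting this is the optimal policy).
\[
  \mathrm{Regret}(K) = \sum_{k=1}^{K} V_1^{\pi_k}(s_1) \;-\; \min_{\pi} \sum_{k=1}^{K} V_1^{\pi}(s_1),
\]

where \pi_k is the policy played in episode k. "Rate-optimal" refers to the \sqrt{K} dependence on the number of episodes; the improvement claimed in the abstract concerns the multiplicative factors in H and d.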
