Poster
Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs
Yeoneung Kim · Insoon Yang · Kwang-Sung Jun
Hall J (level 1) #826
Keywords: MDP, linear bandits
Abstract:
In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees yet is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021), who obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde{O}\big(\min\{d\sqrt{K},\, d^{1.5}\sqrt{\sum_{k=1}^{K}\sigma_k^2}\} + d^2\big)$, where $d$ is the dimension of the features, $K$ is the time horizon, $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde{O}$ ignores polylogarithmic dependence; this is a factor of $d^3$ improvement. For linear mixture MDPs, under the assumption that the maximum cumulative reward in an episode is in $[0,1]$, we achieve a horizon-free regret bound of $\tilde{O}(d\sqrt{K} + d^2)$, where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^{3.5}$ improvement in the leading term and $d^7$ in the lower-order term. Our analysis critically relies on a novel peeling-based regret analysis that leverages the elliptical potential `count' lemma.
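The closing sentence refers to the elliptical potential `count' lemma. For reference, a minimal LaTeX sketch of a commonly used form of that lemma is given below; the threshold of $1$, the symbols $\lambda$ (ridge regularizer) and $L$ (feature-norm bound), and the hidden absolute constants are illustrative assumptions and need not match the exact statement in the paper.

% Sketch of a commonly used form of the elliptical potential `count' lemma
% (constants and the precise statement in the paper may differ).
% Setup: $x_1,\dots,x_K \in \mathbb{R}^d$ with $\|x_k\|_2 \le L$, and
% $V_k = \lambda I + \sum_{s=1}^{k} x_s x_s^\top$ for some $\lambda > 0$.
\[
  \Big|\big\{\, k \in [K] : \|x_k\|_{V_{k-1}^{-1}}^2 \ge 1 \,\big\}\Big|
  \;\le\; O\!\left( d \log\!\Big( 1 + \frac{L^2}{\lambda} \Big) \right).
\]
% In words: the number of rounds whose feature is still `large' under the
% inverse design matrix is at most order $d$ up to a logarithmic factor,
% and in particular does not grow with $K$; a peeling-based argument can
% invoke such a count on each peeled regime rather than summing elliptical
% potentials over all $K$ rounds at once.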