Poster

Tracking the Best Expert in Non-stationary Stochastic Environments

Chen-Yu Wei · Yi-Te Hong · Chi-Jen Lu

Area 5+6+7+8 #141

Keywords: [ Online Learning ]


Abstract: We study the dynamic regret of the multi-armed bandit and experts problems in non-stationary stochastic environments. We introduce a new parameter Λ, which measures the total statistical variance of the loss distributions over T rounds of the process, and study how this quantity affects the regret. We investigate the interaction between Λ and Γ, which counts the number of times the distributions change, as well as between Λ and V, which measures how far the distributions deviate over time. One striking result is that even when Γ, V, and Λ are all restricted to constants, the regret lower bound in the bandit setting still grows with T. The other highlight is that in the full-information setting, a constant regret, independent of T, becomes achievable with constant Γ and Λ, while with constant V and Λ the regret still has a T^{1/3} dependency. We not only propose algorithms with upper bound guarantees, but prove matching lower bounds as well.
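To make the three non-stationarity measures concrete, here is a small illustrative sketch (not taken from the paper; the exact definitions there may differ in constants and conventions) that computes plausible versions of Γ, V, and Λ for a toy sequence of Bernoulli loss distributions over T rounds and K arms:

```python
import numpy as np

# Hedged sketch: one natural reading of the abstract's parameters.
#   gamma  - number of segments, i.e. 1 + number of rounds where the
#            mean loss vector changes (an assumption, not the paper's text)
#   V      - cumulative drift of the means over time
#   Lam    - total statistical variance of the loss distributions
T, K = 12, 3

# Piecewise-constant mean losses with a single change at round 6.
means = np.vstack([np.tile([0.2, 0.5, 0.8], (6, 1)),
                   np.tile([0.7, 0.4, 0.1], (6, 1))])  # shape (T, K)

# Gamma: 1 + number of times the distribution changes between rounds.
gamma = 1 + int(np.sum(np.any(means[1:] != means[:-1], axis=1)))

# V: sum over rounds of the largest per-arm change in mean loss.
V = float(np.sum(np.max(np.abs(means[1:] - means[:-1]), axis=1)))

# Lambda: for Bernoulli(mu) losses the variance is mu * (1 - mu);
# summing the worst-case per-round variance is one plausible convention.
Lam = float(np.sum(np.max(means * (1 - means), axis=1)))

print(gamma, round(V, 2), round(Lam, 2))
```

With this toy sequence there is a single distribution change, so Γ stays constant even as T grows, while V and Λ scale with the size and number of the changes; the paper's results concern how regret must scale when these quantities are held constant.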
