Poster
Adaptation to Easy Data in Prediction with Limited Advice
Tobias Sommer Thune · Yevgeny Seldin
We derive an online learning algorithm with improved regret guarantees for "easy" loss sequences. We consider two types of "easiness": (a) stochastic loss sequences and (b) adversarial loss sequences with small effective range of the losses. While a number of algorithms have been proposed for exploiting small effective range in the full information setting, Gerchinovitz and Lattimore [2016] have shown the impossibility of regret scaling with the effective range of the losses in the bandit setting. We show that just one additional observation per round is sufficient to circumvent the impossibility result. The proposed Second Order Difference Adjustments (SODA) algorithm requires no prior knowledge of the effective range of the losses, $\varepsilon$, and achieves an $O(\varepsilon \sqrt{KT \ln K}) + \tilde{O}(\varepsilon K \sqrt[4]{T})$ expected regret guarantee, where $T$ is the time horizon and $K$ is the number of actions. The scaling with the effective loss range is achieved under significantly weaker assumptions than those made by Cesa-Bianchi and Shamir [2018] in an earlier attempt to circumvent the impossibility result. We also provide a regret lower bound of $\Omega(\varepsilon\sqrt{T K})$, which almost matches the upper bound. In addition, we show that in the stochastic setting SODA achieves an $O\left(\sum_{a:\Delta_a>0} \frac{K\varepsilon^2}{\Delta_a}\right)$ pseudo-regret bound that holds simultaneously with the adversarial regret guarantee. In other words, SODA is safe against an unrestricted oblivious adversary and provides improved regret guarantees for at least two different types of "easiness" simultaneously.
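To make the "one additional observation per round" feedback model concrete, below is a minimal sketch of an exponential-weights learner that plays one arm, additionally observes a second, uniformly sampled arm, and drives its update with importance-weighted loss-difference estimates. This is an illustrative assumption, not the paper's actual SODA update: the real algorithm's second-order difference adjustments and adaptive learning rate are omitted, and the function name `soda_sketch`, the fixed learning rate `eta`, and the estimator details are all hypothetical.

```python
import numpy as np

def soda_sketch(losses, eta=0.1, seed=0):
    """Illustrative limited-advice learner (NOT the paper's SODA algorithm).

    Each round: play one arm from an exponential-weights distribution,
    observe one extra arm chosen uniformly at random, and update with an
    importance-weighted estimate of the loss difference between the
    observed arm and the played arm.
    """
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    cum_diff = np.zeros(K)  # cumulative loss-difference estimates
    total_loss = 0.0
    for t in range(T):
        # numerically stable exponential-weights distribution
        w = np.exp(-eta * (cum_diff - cum_diff.min()))
        p = w / w.sum()
        a = rng.choice(K, p=p)   # played arm: its loss is incurred and observed
        b = rng.integers(K)      # the one additional observation, uniform over arms
        total_loss += losses[t, a]
        # E[K * (losses[t,b] - losses[t,a]) * 1{b=i}] = losses[t,i] - losses[t,a],
        # so this is an unbiased loss-difference estimate for every arm i
        diff_est = np.zeros(K)
        diff_est[b] = K * (losses[t, b] - losses[t, a])
        cum_diff += diff_est
    return total_loss

# Usage on a synthetic "small effective range" sequence (losses in [0.4, 0.6]):
gen = np.random.default_rng(1)
losses = gen.uniform(0.4, 0.6, size=(10_000, 5))
print(soda_sketch(losses))
```

Because the update depends only on loss differences, a constant shift of all losses leaves the learner's behavior unchanged, which is the mechanism that lets the regret scale with the effective range $\varepsilon$ rather than the absolute loss magnitude.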
Author Information
Tobias Sommer Thune (University of Copenhagen)
Yevgeny Seldin (University of Copenhagen)
More from the Same Authors
- 2021 Spotlight: Online Active Learning with Surrogate Loss Functions »
  Giulia DeSalvo · Claudio Gentile · Tobias Sommer Thune
- 2021 Poster: Online Active Learning with Surrogate Loss Functions »
  Giulia DeSalvo · Claudio Gentile · Tobias Sommer Thune
- 2019 Poster: Nonstochastic Multiarmed Bandits with Unrestricted Delays »
  Tobias Sommer Thune · Nicolò Cesa-Bianchi · Yevgeny Seldin
- 2018 Poster: Factored Bandits »
  Julian Zimmert · Yevgeny Seldin
- 2017: Yevgeny Seldin - A Strongly Quasiconvex PAC-Bayesian Bound »
  Yevgeny Seldin
- 2013 Workshop: Resource-Efficient Machine Learning »
  Yevgeny Seldin · Yasin Abbasi Yadkori · Yacov Crammer · Ralf Herbrich · Peter Bartlett
- 2013 Poster: PAC-Bayes-Empirical-Bernstein Inequality »
  Ilya Tolstikhin · Yevgeny Seldin
- 2013 Spotlight: PAC-Bayes-Empirical-Bernstein Inequality »
  Ilya Tolstikhin · Yevgeny Seldin
- 2013 Poster: Online Learning in Markov Decision Processes with Adversarially Chosen Transition Probability Distributions »
  Yasin Abbasi Yadkori · Peter Bartlett · Varun Kanade · Yevgeny Seldin · Csaba Szepesvari
- 2012 Workshop: Multi-Trade-offs in Machine Learning »
  Yevgeny Seldin · Guy Lever · John Shawe-Taylor · Nicolò Cesa-Bianchi · Yacov Crammer · Francois Laviolette · Gabor Lugosi · Peter Bartlett
- 2011 Workshop: New Frontiers in Model Order Selection »
  Yevgeny Seldin · Yacov Crammer · Nicolò Cesa-Bianchi · Francois Laviolette · John Shawe-Taylor
- 2011 Poster: PAC-Bayesian Analysis of Contextual Bandits »
  Yevgeny Seldin · Peter Auer · Francois Laviolette · John Shawe-Taylor · Ronald Ortner
- 2006 Poster: Information Bottleneck for Non Co-Occurrence Data »
  Yevgeny Seldin · Noam Slonim · Naftali Tishby