We consider the problem of regret minimization in non-parametric stochastic bandits. When the rewards are known to be bounded from above, there exist asymptotically optimal algorithms whose asymptotic regret depends on an infimum of Kullback-Leibler divergences (KL). These algorithms are computationally expensive and require storing all past rewards, so simpler but non-optimal algorithms are often used instead. We introduce several methods to approximate the infimum KL that drastically reduce the computational and memory costs of existing optimal algorithms while preserving their regret guarantees. We apply our findings to design new variants of the MED and IMED algorithms, and demonstrate their interest with extensive numerical simulations.
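For context, the infimum-KL quantity referenced in the abstract admits a well-known dual formulation in the bounded-rewards setting (due to Honda and Takemura): for a distribution F supported on (-inf, 1], K_inf(F, mu) = max over lambda in [0, 1] of E_F[log(1 - lambda (X - mu) / (1 - mu))]. The sketch below evaluates this dual on an empirical reward sample; it is an illustrative implementation under our own naming and interface, not the algorithmic variants proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kinf_upper_bounded(rewards, mu, upper=1.0):
    """Empirical K_inf(F, mu) for rewards bounded above by `upper`,
    computed via the dual formulation
        K_inf(F, mu) = max_{lam in [0, 1]} E_F[log(1 - lam*(X - mu)/(upper - mu))].
    Returns 0 when the empirical mean already exceeds mu (no divergence needed)."""
    x = np.asarray(rewards, dtype=float)
    if x.mean() >= mu:
        return 0.0
    def neg_dual(lam):
        # Clip to avoid log of non-positive values at the boundary lam = 1, x = upper.
        vals = 1.0 - lam * (x - mu) / (upper - mu)
        return -np.mean(np.log(np.maximum(vals, 1e-12)))
    res = minimize_scalar(neg_dual, bounds=(0.0, 1.0), method="bounded")
    return max(0.0, -res.fun)
```

This one-dimensional concave maximization avoids searching over all alternative distributions directly, which is why dual-based evaluations of K_inf are standard in this literature; the optimization is still performed over the full reward history, illustrating the per-round cost that the paper's approximations aim to reduce.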
Author Information
Dorian Baudry (École Polytechnique)
Fabien Pesquerel (Inria)
Rémy Degenne (Inria)
Odalric-Ambrym Maillard (Inria Lille - Nord Europe)
More from the Same Authors
- 2021 Spotlight: Online Sign Identification: Minimization of the Number of Errors in Thresholding Bandits
  Reda Ouhamma · Odalric-Ambrym Maillard · Vianney Perchet
- 2023 Poster: An $\varepsilon$-Best-Arm Identification Algorithm for Fixed-Confidence and Beyond
  Marc Jourdan · Rémy Degenne · Emilie Kaufmann
- 2023 Poster: Non-Asymptotic Analysis of a UCB-based Top Two Algorithm
  Marc Jourdan · Rémy Degenne
- 2022 Poster: Top Two Algorithms Revisited
  Marc Jourdan · Rémy Degenne · Dorian Baudry · Rianne de Heide · Emilie Kaufmann
- 2022 Poster: IMED-RL: Regret optimal learning of ergodic Markov decision processes
  Fabien Pesquerel · Odalric-Ambrym Maillard
- 2022 Poster: On Elimination Strategies for Bandit Fixed-Confidence Identification
  Andrea Tirinzoni · Rémy Degenne
- 2022 Poster: Efficient Change-Point Detection for Tackling Piecewise-Stationary Bandits
  Lilian Besson · Emilie Kaufmann · Odalric-Ambrym Maillard · Julien Seznec
- 2021 Poster: Stochastic bandits with groups of similar arms.
  Fabien Pesquerel · Hassan SABER · Odalric-Ambrym Maillard
- 2021 Poster: Indexed Minimum Empirical Divergence for Unimodal Bandits
  Hassan SABER · Pierre Ménard · Odalric-Ambrym Maillard
- 2021 Poster: Stochastic Online Linear Regression: the Forward Algorithm to Replace Ridge
  Reda Ouhamma · Odalric-Ambrym Maillard · Vianney Perchet
- 2021 Poster: From Optimality to Robustness: Adaptive Re-Sampling Strategies in Stochastic Bandits
  Dorian Baudry · Patrick Saux · Odalric-Ambrym Maillard
- 2021 Poster: Online Sign Identification: Minimization of the Number of Errors in Thresholding Bandits
  Reda Ouhamma · Odalric-Ambrym Maillard · Vianney Perchet
- 2020 Poster: Sub-sampling for Efficient Non-Parametric Bandit Exploration
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2020 Spotlight: Sub-sampling for Efficient Non-Parametric Bandit Exploration
  Dorian Baudry · Emilie Kaufmann · Odalric-Ambrym Maillard
- 2019 Poster: Pure Exploration with Multiple Correct Answers
  Rémy Degenne · Wouter Koolen
- 2019 Poster: Non-Asymptotic Pure Exploration by Solving Games
  Rémy Degenne · Wouter Koolen · Pierre Ménard
- 2016 Poster: Combinatorial semi-bandit with known covariance
  Rémy Degenne · Vianney Perchet