Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest \emph{improvement} an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. A second motivation comes from fairness: best-case lower bounds are known to be instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our main contribution is a general method for proving best-case lower bounds for Follow the Regularized Leader (FTRL) algorithms with time-varying regularizers. Using this method, we show that best-case lower bounds are of the same order as existing upper regret bounds; this covers fixed learning rates, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary losses, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
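As a quick reference for the setting described above, the following is a minimal sketch of the standard definitions in generic notation (the losses $\ell_t$, iterates $x_t$, decision set $\mathcal{X}$, and regularizers $r_t$ are this sketch's own symbols, not necessarily the paper's notation). The regret of a learner that plays $x_t \in \mathcal{X}$ and then observes the convex loss $\ell_t$ is
\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \ell_t(x),
\]
so an upper regret bound controls how much worse the learner can do than the best fixed action, while a best-case lower bound of the form $\mathrm{Regret}_T \ge -B_T$ controls how much \emph{better} it can do. FTRL with time-varying regularizers plays
\[
x_{t+1} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{X}} \Big\{ \sum_{s=1}^{t} \ell_s(x) + r_t(x) \Big\},
\]
and its linearized version replaces each past loss $\ell_s$ by the linear approximation $x \mapsto \langle \nabla \ell_s(x_s), x \rangle$. In this notation, the results above say that for FTRL the magnitude $B_T$ can be taken of the same order as the known upper bounds, whereas linearized FTRL admits loss sequences on which $\mathrm{Regret}_T = -\Omega(T)$.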
Author Information
Cristóbal Guzmán (University of Twente)
Nishant Mehta (University of Victoria)
Ali Mortazavi (University of Victoria), working on learning theory, online learning, and algorithms under uncertainty
More from the Same Authors
- 2022: Contributed Talks 3
  Cristóbal Guzmán · Fangshuo Liao · Vishwak Srinivasan · Zhiyuan Li
- 2022 Workshop: OPT 2022: Optimization for Machine Learning
  Courtney Paquette · Sebastian Stich · Quanquan Gu · Cristóbal Guzmán · John Duchi
- 2022 Poster: Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusions
  Xufeng Cai · Chaobing Song · Cristóbal Guzmán · Jelena Diakonikolas
- 2022 Poster: Differentially Private Generalized Linear Models Revisited
  Raman Arora · Raef Bassily · Cristóbal Guzmán · Michael Menart · Enayat Ullah
- 2022 Poster: Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness
  Sarah Sachs · Hedi Hadiji · Tim van Erven · Cristóbal Guzmán
- 2021: Q&A with Cristóbal Guzmán
  Cristóbal Guzmán
- 2021: Non-Euclidean Differentially Private Stochastic Convex Optimization, Cristóbal Guzmán
  Cristóbal Guzmán
- 2021 Poster: Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings
  Raef Bassily · Cristóbal Guzmán · Michael Menart
- 2020 Poster: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2020 Spotlight: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2019 Poster: Dying Experts: Efficient Algorithms with Optimal Regret Bounds
  Hamid Shayestehmanesh · Sajjad Azami · Nishant Mehta