Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules. While numerous boosting methods exist in statistical learning, in both the realizable and agnostic settings, online boosting methods exist only in the realizable case. In this work we provide the first agnostic online boosting algorithm: given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret. The algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer into an online booster. Moreover, this reduction extends to the statistical as well as the online realizable settings, thus unifying the four cases of statistical/online and agnostic/realizable boosting.
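The reduction in the abstract treats the online convex optimizer as a black box that drives how weak-learner outputs are aggregated. The sketch below is only an illustration of that black-box pattern, not the paper's algorithm: it uses projected online gradient descent as the OCO subroutine to maintain aggregation weights over a pool of weak predictors, each of which is only slightly better than random. All names, the surrogate loss, and the toy data model are assumptions made for the example.

```python
import numpy as np

class OnlineGradientDescent:
    """Generic OCO subroutine: online gradient descent kept on the simplex."""
    def __init__(self, n, lr=0.1):
        self.w = np.full(n, 1.0 / n)  # start from uniform weights
        self.lr = lr

    def step(self, grad):
        w = self.w - self.lr * grad
        # crude projection back onto the probability simplex
        # (clip-and-renormalize sketch, not an exact Euclidean projection)
        w = np.clip(w, 1e-12, None)
        self.w = w / w.sum()
        return self.w

def boosted_predict(weak_preds, weights):
    """Weighted-majority vote over {-1,+1} weak predictions."""
    return np.sign(weak_preds @ weights)

# Toy run: each weak learner predicts the true label with probability
# 1/2 + edge_i, i.e. only marginally better than trivial.
rng = np.random.default_rng(0)
n_weak, T = 5, 200
edges = np.linspace(0.02, 0.2, n_weak)
oco = OnlineGradientDescent(n_weak)
mistakes = 0
for t in range(T):
    y = rng.choice([-1.0, 1.0])
    weak_preds = np.where(rng.random(n_weak) < 0.5 + edges, y, -y)
    yhat = boosted_predict(weak_preds, oco.w)
    mistakes += int(yhat != y)
    # linear surrogate-loss gradient: learners that erred get pushed down
    grad = -(weak_preds * y)
    oco.step(grad)
print(mistakes, "mistakes over", T, "rounds")
```

The point of the design is that the booster never inspects how the weight update works; swapping in any other online convex optimizer with sublinear regret leaves the rest of the loop unchanged, which is what makes the reduction modular.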
Author Information
Nataly Brukhim (Princeton University)
Xinyi Chen (Princeton University)
Elad Hazan (Princeton University)
Shay Moran (Technion)
More from the Same Authors
- 2020 Poster: Geometric Exploration for Online Control (Orestis Plevrakis · Elad Hazan)
- 2020 Poster: Non-Stochastic Control with Bandit Feedback (Paula Gradu · John Hallman · Elad Hazan)
- 2020 AffinityWorkshop: Women in Machine Learning (Xinyi Chen · Erin Grant · Kristy Choi · Krystal Maughan · Xenia Miscouridou · Hanwen Shen · Raquel Aoki · Belén Saldías · Mel Woghiren · Elizabeth Wood)
- 2020 Poster: Synthetic Data Generators -- Sequential and Private (Olivier Bousquet · Roi Livni · Shay Moran)
- 2020 Poster: Learning from Mixtures of Private and Public Populations (Raef Bassily · Shay Moran · Anupama Nandi)
- 2020 Poster: A Limitation of the PAC-Bayes Framework (Roi Livni · Shay Moran)
- 2019 Poster: Private Learning Implies Online Learning: An Efficient Reduction (Alon Gonen · Elad Hazan · Shay Moran)
- 2019 Spotlight: Private Learning Implies Online Learning: An Efficient Reduction (Alon Gonen · Elad Hazan · Shay Moran)
- 2019 Poster: An adaptive nearest neighbor rule for classification (Akshay Balsubramani · Sanjoy Dasgupta · Yoav Freund · Shay Moran)
- 2019 Spotlight: An adaptive nearest neighbor rule for classification (Akshay Balsubramani · Sanjoy Dasgupta · Yoav Freund · Shay Moran)
- 2019 Poster: Learning to Screen (Alon Cohen · Avinatan Hassidim · Haim Kaplan · Yishay Mansour · Shay Moran)
- 2019 Poster: Limits of Private Learning with Access to Public Data (Raef Bassily · Shay Moran · Noga Alon)
- 2019 Poster: Logarithmic Regret for Online Control (Naman Agarwal · Elad Hazan · Karan Singh)
- 2019 Oral: Logarithmic Regret for Online Control (Naman Agarwal · Elad Hazan · Karan Singh)
- 2018 Poster: Online Improper Learning with an Approximation Oracle (Elad Hazan · Wei Hu · Yuanzhi Li · Zhiyuan Li)
- 2018 Poster: Online Learning of Quantum States (Scott Aaronson · Xinyi Chen · Elad Hazan · Satyen Kale · Ashwin Nayak)
- 2018 Poster: Spectral Filtering for General Linear Dynamical Systems (Elad Hazan · Holden Lee · Karan Singh · Cyril Zhang · Yi Zhang)
- 2018 Oral: Spectral Filtering for General Linear Dynamical Systems (Elad Hazan · Holden Lee · Karan Singh · Cyril Zhang · Yi Zhang)
- 2017 Poster: Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues (Noga Alon · Moshe Babaioff · Yannai A. Gonczarowski · Yishay Mansour · Shay Moran · Amir Yehudayoff)
- 2017 Spotlight: Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues (Noga Alon · Moshe Babaioff · Yannai A. Gonczarowski · Yishay Mansour · Shay Moran · Amir Yehudayoff)
- 2017 Poster: Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls (Zeyuan Allen-Zhu · Elad Hazan · Wei Hu · Yuanzhi Li)
- 2017 Poster: Learning Linear Dynamical Systems via Spectral Filtering (Elad Hazan · Karan Singh · Cyril Zhang)
- 2017 Spotlight: Online Learning of Linear Dynamical Systems (Elad Hazan · Karan Singh · Cyril Zhang)
- 2017 Spotlight: Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls (Zeyuan Allen-Zhu · Elad Hazan · Wei Hu · Yuanzhi Li)
- 2016 Poster: Optimal Black-Box Reductions Between Optimization Objectives (Zeyuan Allen-Zhu · Elad Hazan)
- 2016 Poster: Supervised learning through the lens of compression (Ofir David · Shay Moran · Amir Yehudayoff)
- 2016 Oral: Supervised learning through the lens of compression (Ofir David · Shay Moran · Amir Yehudayoff)
- 2016 Poster: A Non-generative Framework and Convex Relaxations for Unsupervised Learning (Elad Hazan · Tengyu Ma)
- 2016 Poster: The Limits of Learning with Missing Data (Brian Bullins · Elad Hazan · Tomer Koren)
- 2015 Poster: Online Learning for Adversaries with Memory: Price of Past Mistakes (Oren Anava · Elad Hazan · Shie Mannor)
- 2015 Poster: Beyond Convexity: Stochastic Quasi-Convex Optimization (Elad Hazan · Kfir Y. Levy · Shai Shalev-Shwartz)
- 2015 Poster: Online Gradient Boosting (Alina Beygelzimer · Elad Hazan · Satyen Kale · Haipeng Luo)
- 2009 Poster: On Stochastic and Worst-case Models for Investing (Elad Hazan · Satyen Kale)
- 2009 Oral: On Stochastic and Worst-case Models for Investing (Elad Hazan · Satyen Kale)
- 2009 Poster: An Efficient Interior-Point Method for Minimum-Regret Learning in Online Convex Optimization (Elad Hazan · Nimrod Megiddo)
- 2009 Spotlight: An Efficient Interior-Point Method for Minimum-Regret Learning in Online Convex Optimization (Elad Hazan · Nimrod Megiddo)
- 2009 Poster: Beyond Convexity: Online Submodular Minimization (Elad Hazan · Satyen Kale)
- 2007 Oral: Adaptive Online Gradient Descent (Peter Bartlett · Elad Hazan · Sasha Rakhlin)
- 2007 Poster: Adaptive Online Gradient Descent (Peter Bartlett · Elad Hazan · Sasha Rakhlin)
- 2007 Poster: Computational Equivalence of Fixed Points and No Regret Algorithms, and Convergence to Equilibria (Elad Hazan · Satyen Kale)