Some of the simplest loss functions considered in Machine Learning are the square loss, the logistic loss and the hinge loss. The most common family of algorithms, including Gradient Descent (GD) with and without Weight Decay, always predicts with a linear combination of the past instances. We give a random construction for sets of examples where the target linear weight vector is trivial to learn but any algorithm from the above family is drastically sub-optimal. Our lower bound on the latter algorithms holds even if the algorithms are enhanced with an arbitrary kernel function. This type of result was known for the square loss. However, we develop new techniques that let us prove such hardness results for any loss function satisfying some minimal requirements (including the three losses listed above). We also show that algorithms that regularize with the squared Euclidean distance are easily confused by random features. Finally, we conclude by discussing related open problems regarding feedforward neural networks. We conjecture that our hardness results hold for any training algorithm based on squared Euclidean distance regularization (i.e., back-propagation with the Weight Decay heuristic).
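As a rough illustration of the phenomenon the abstract describes (Euclidean-regularized learners being confused by random features), the following NumPy sketch compares ridge regression, i.e. square loss with weight decay, against a trivial single-coordinate baseline on a task where the label is exactly one of d random ±1 features. This is not the paper's construction; the dimensions d and n_train and the regularization strength lam are illustrative choices made here for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 200        # number of +/-1 features (one of them is the target)
n_train = 50   # far fewer training examples than features
n_test = 2000

def sample(n):
    """Random +/-1 instances; the label is simply the first coordinate."""
    X = rng.choice([-1.0, 1.0], size=(n, d))
    return X, X[:, 0].copy()

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Euclidean-regularized learner: ridge regression (square loss + weight decay).
# Its weight vector lies in the span of the training instances.
lam = 1.0
w_ridge = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
err_ridge = np.mean(np.sign(X_te @ w_ridge) != y_te)

# Trivial baseline: pick the single coordinate most correlated with the
# training labels and predict with it alone.
best = np.argmax(np.abs(X_tr.T @ y_tr))
err_best = np.mean(np.sign(X_te[:, best]) != y_te)

print(f"ridge (weight decay) 0/1 test error:        {err_ridge:.3f}")
print(f"single-coordinate baseline 0/1 test error:  {err_best:.3f}")
```

Because n_train is much smaller than d, the ridge weight vector can only recover the component of the target that lies in the random span of the training instances, so a noticeable fraction of its test-time signs is typically wrong, whereas the correlation-maximizing coordinate is the true one with high probability and the baseline's error is essentially zero.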
Author Information
Michal Derezinski (University of Michigan)
Manfred K. Warmuth (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2014 Poster: The limits of squared Euclidean distance regularization »
  Thu. Dec 11th 12:00 -- 04:59 AM, Room: Level 2, room 210D
More from the Same Authors
- 2021 Spotlight: Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update »
  Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael Mahoney
- 2021 Poster: Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update »
  Michal Derezinski · Jonathan Lacotte · Mert Pilanci · Michael Mahoney
- 2020 Poster: Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization »
  Michal Derezinski · Burak Bartan · Mert Pilanci · Michael Mahoney
- 2020 Poster: Sampling from a k-DPP without looking at all items »
  Daniele Calandriello · Michal Derezinski · Michal Valko
- 2020 Spotlight: Sampling from a k-DPP without looking at all items »
  Daniele Calandriello · Michal Derezinski · Michal Valko
- 2020 Poster: Exact expressions for double descent and implicit regularization via surrogate random design »
  Michal Derezinski · Feynman Liang · Michael Mahoney
- 2020 Poster: Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method »
  Michal Derezinski · Rajiv Khanna · Michael Mahoney
- 2020 Poster: Precise expressions for random projections: Low-rank approximation and randomized Newton »
  Michal Derezinski · Feynman Liang · Zhenyu Liao · Michael Mahoney
- 2020 Poster: Reparameterizing Mirror Descent as Gradient Descent »
  Ehsan Amid · Manfred K. Warmuth
- 2020 Oral: Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method »
  Michal Derezinski · Rajiv Khanna · Michael Mahoney
- 2019 Workshop: Minding the Gap: Between Fairness and Ethics »
  Igor Rubinov · Risi Kondor · Jack Poulson · Manfred K. Warmuth · Emanuel Moss · Alexa Hagerty
- 2019: Opening Remarks »
  Jack Poulson · Manfred K. Warmuth
- 2019 Poster: Distributed estimation of the inverse Hessian by determinantal averaging »
  Michal Derezinski · Michael Mahoney
- 2019 Poster: Exact sampling of determinantal point processes with sublinear time preprocessing »
  Michal Derezinski · Daniele Calandriello · Michal Valko
- 2019 Poster: Robust Bi-Tempered Logistic Loss Based on Bregman Divergences »
  Ehsan Amid · Manfred K. Warmuth · Rohan Anil · Tomer Koren
- 2018 Poster: Leveraged volume sampling for linear regression »
  Michal Derezinski · Manfred K. Warmuth · Daniel Hsu
- 2018 Spotlight: Leveraged volume sampling for linear regression »
  Michal Derezinski · Manfred K. Warmuth · Daniel Hsu
- 2017 Poster: Online Dynamic Programming »
  Holakou Rahmanian · Manfred K. Warmuth
- 2017 Poster: Unbiased estimates for linear regression via volume sampling »
  Michal Derezinski · Manfred K. Warmuth
- 2017 Spotlight: Unbiased estimates for linear regression via volume sampling »
  Michal Derezinski · Manfred K. Warmuth
- 2013 Workshop: Large Scale Matrix Analysis and Inference »
  Reza Zadeh · Gunnar Carlsson · Michael Mahoney · Manfred K. Warmuth · Wouter M Koolen · Nati Srebro · Satyen Kale · Malik Magdon-Ismail · Ashish Goel · Matei A Zaharia · David Woodruff · Ioannis Koutis · Benjamin Recht
- 2012 Poster: Putting Bayes to sleep »
  Wouter M Koolen · Dmitri Adamskiy · Manfred K. Warmuth
- 2012 Spotlight: Putting Bayes to sleep »
  Wouter M Koolen · Dmitri Adamskiy · Manfred K. Warmuth
- 2011 Poster: Learning Eigenvectors for Free »
  Wouter M Koolen · Wojciech Kotlowski · Manfred K. Warmuth
- 2010 Poster: Repeated Games against Budgeted Adversaries »
  Jacob D Abernethy · Manfred K. Warmuth
- 2007 Spotlight: Boosting Algorithms for Maximizing the Soft Margin »
  Manfred K. Warmuth · Karen Glocer · Gunnar Rätsch
- 2007 Poster: Boosting Algorithms for Maximizing the Soft Margin »
  Manfred K. Warmuth · Karen Glocer · Gunnar Rätsch
- 2006 Poster: Randomized PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension »
  Manfred K. Warmuth · Dima Kuzmin