Poster
On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai · Ohad Shamir
Tue Dec 10 05:30 PM – 07:30 PM (PST) @ East Exhibition Hall B + C #224
Recently, a spate of papers has provided positive theoretical results for training overparameterized neural networks (where the network size is larger than what is needed to achieve low error). The key insight is that with sufficient overparameterization, gradient-based methods will implicitly leave some components of the network relatively unchanged, so the optimization dynamics will behave as if those components are essentially fixed at their initial random values. In fact, fixing these *explicitly* leads to the well-known approach of learning with random features (e.g., Rahimi and Recht, 2008, 2009). In other words, these techniques imply that we can successfully learn with neural networks whenever we can successfully learn with random features. In this paper, we formalize the link between existing results and random features, and argue that despite the impressive positive results, random feature approaches are also inherently limited in what they can explain. In particular, we prove that random features cannot be used to learn *even a single ReLU neuron* (over standard Gaussian inputs in $\mathbb{R}^d$ and with $\text{poly}(d)$ weights), unless the network size (or the magnitude of its weights) is exponentially large in $d$. Since a single neuron *is* known to be learnable with gradient-based methods, we conclude that we are still far from a satisfying general explanation for the empirical success of neural networks. For completeness, we also provide a simple, self-contained proof, using a random features technique, that one-hidden-layer neural networks can learn low-degree polynomials.
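The random-features setup described in the abstract can be sketched in a few lines of numpy: draw a random first layer, freeze it, and train only the linear output layer. This is an illustrative toy (not the paper's construction or its lower-bound regime); the dimensions, the target ReLU neuron, and the ridge least-squares solver are all assumptions chosen for a small, runnable demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, width = 20, 2000, 500

# Target: a single ReLU neuron with a fixed unit-norm weight vector.
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)

X = rng.normal(size=(n, d))          # standard Gaussian inputs
y = np.maximum(X @ w_star, 0.0)      # labels from the target neuron

# Random-features model: the first layer is drawn at random and frozen;
# only the output weights `a` are trained (here by ridge least squares).
W = rng.normal(size=(width, d)) / np.sqrt(d)
Phi = np.maximum(X @ W.T, 0.0)       # fixed random ReLU features

lam = 1e-3
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(width), Phi.T @ y)

mse = np.mean((Phi @ a - y) ** 2)
baseline = np.mean((y - y.mean()) ** 2)  # predict-the-mean baseline
print(f"random-features train MSE: {mse:.4f} (baseline {baseline:.4f})")
```

At this small dimension the fit looks fine; the paper's point is about scaling — for random features to approximate a single ReLU neuron in high dimension, the lower bound forces the width (or the weight magnitudes) to grow exponentially in $d$, whereas training the first layer with gradient-based methods does not.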
Author Information
Gilad Yehudai (Weizmann Institute of Science)
Ohad Shamir (Weizmann Institute of Science)
More from the Same Authors

2021 Spotlight: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
Itay Safran · Ohad Shamir 
2022 : On the Complexity of Finding Small Subgradients in Nonsmooth Optimization »
Guy Kornowski · Ohad Shamir 
2022 Poster: On Margin Maximization in Linear and ReLU Networks »
Gal Vardi · Ohad Shamir · Nati Srebro 
2022 Poster: The Sample Complexity of One-Hidden-Layer Neural Networks »
Gal Vardi · Ohad Shamir · Nati Srebro 
2022 Poster: Reconstructing Training Data From Trained Neural Networks »
Niv Haim · Gal Vardi · Gilad Yehudai · Michal Irani · Ohad Shamir 
2022 Poster: Gradient Methods Provably Converge to NonRobust Networks »
Gal Vardi · Gilad Yehudai · Ohad Shamir 
2021 Poster: Learning a Single Neuron with Bias Using Gradient Descent »
Gal Vardi · Gilad Yehudai · Ohad Shamir 
2021 Poster: Oracle Complexity in Nonsmooth Nonconvex Optimization »
Guy Kornowski · Ohad Shamir 
2021 Poster: A Stochastic Newton Algorithm for Distributed Convex Optimization »
Brian Bullins · Kshitij Patel · Ohad Shamir · Nathan Srebro · Blake Woodworth 
2021 Oral: Oracle Complexity in Nonsmooth Nonconvex Optimization »
Guy Kornowski · Ohad Shamir 
2021 Poster: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
Itay Safran · Ohad Shamir 
2020 : Poster Session 1 (gather.town) »
Laurent Condat · Tiffany Vlaar · Ohad Shamir · Mohammadi Zaki · Zhize Li · Guan-Horng Liu · Samuel Horváth · Mher Safaryan · Yoni Choukroun · Kumar Shridhar · Nabil Kahale · Jikai Jin · Pratik Kumar Jawanpuria · Gaurav Kumar Yadav · Kazuki Koyama · Junyoung Kim · Xiao Li · Saugata Purkayastha · Adil Salim · Dighanchal Banerjee · Peter Richtarik · Lakshman Mahto · Tian Ye · Bamdev Mishra · Huikang Liu · Jiajie Zhu 
2020 : Contributed talks in Session 1 (Zoom) »
Sebastian Stich · Laurent Condat · Zhize Li · Ohad Shamir · Tiffany Vlaar · Mohammadi Zaki 
2020 : Contributed Video: Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?, Ohad Shamir »
Ohad Shamir 
2020 Poster: Neural Networks with Small Weights and Depth-Separation Barriers »
Gal Vardi · Ohad Shamir 
2018 Poster: Are ResNets Provably Better than Linear Predictors? »
Ohad Shamir 
2018 Poster: Global Nonconvex Optimization with Discretized Diffusions »
Murat Erdogdu · Lester Mackey · Ohad Shamir 
2016 Poster: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems »
Yossi Arjevani · Ohad Shamir 
2016 Poster: Without-Replacement Sampling for Stochastic Gradient Methods »
Ohad Shamir 
2016 Oral: Without-Replacement Sampling for Stochastic Gradient Methods »
Ohad Shamir 
2015 Poster: Communication Complexity of Distributed Convex Learning and Optimization »
Yossi Arjevani · Ohad Shamir 
2014 Poster: Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation »
Ohad Shamir 
2014 Poster: On the Computational Efficiency of Training Neural Networks »
Roi Livni · Shai Shalev-Shwartz · Ohad Shamir 
2013 Poster: Online Learning with Switching Costs and Other Adaptive Adversaries »
Nicolò Cesa-Bianchi · Ofer Dekel · Ohad Shamir