Poster

On the Power and Limitations of Random Features for Understanding Neural Networks

Gilad Yehudai · Ohad Shamir

East Exhibition Hall B + C #224

Keywords: [ Optimization -> Non-Convex Optimization ] [ Theory -> Hardness of Learning and Approximations ] [ Learning Theory ] [ Deep Learning ] [ Optimization for Deep Networks ]


Abstract: Recently, a spate of papers has provided positive theoretical results for training over-parameterized neural networks (where the network size is larger than what is needed to achieve low error). The key insight is that with sufficient over-parameterization, gradient-based methods will implicitly leave some components of the network relatively unchanged, so the optimization dynamics will behave as if those components are essentially fixed at their initial random values. In fact, fixing these components explicitly leads to the well-known approach of learning with random features (e.g., Rahimi and Recht, 2008, 2009). In other words, these techniques imply that we can successfully learn with neural networks whenever we can successfully learn with random features. In this paper, we formalize the link between existing results and random features, and argue that, despite the impressive positive results, random-feature approaches are inherently limited in what they can explain. In particular, we prove that random features cannot be used to learn even a single ReLU neuron (over standard Gaussian inputs in $\mathbb{R}^d$ and with $\mathrm{poly}(d)$ weights) unless the network size (or the magnitude of its weights) is exponentially large in $d$. Since a single neuron is in fact known to be learnable with gradient-based methods, we conclude that we are still far from a satisfying general explanation for the empirical success of neural networks. For completeness, we also provide a simple self-contained proof, using a random-features technique, that one-hidden-layer neural networks can learn low-degree polynomials.
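To make the random-features setup concrete, here is a minimal NumPy sketch of the approach the abstract contrasts with full network training: the first-layer weights are drawn at random and frozen, and only the linear output layer is fit (here by ridge regression) against a single ReLU neuron target over standard Gaussian inputs. All dimensions, names, and the regularization constant are illustrative assumptions, not taken from the paper; the paper's lower bound says that, as d grows, any such scheme needs exponentially many features (or exponentially large weights) to approximate the target well.

    import numpy as np

    # Illustrative sketch of learning with random ReLU features.
    # Sizes below are hypothetical, chosen only for the demo.
    rng = np.random.default_rng(0)
    d, n_features, n_samples = 20, 500, 2000

    # Target: a single ReLU neuron with fixed weight vector w_star.
    w_star = rng.standard_normal(d)
    X = rng.standard_normal((n_samples, d))  # standard Gaussian inputs
    y = np.maximum(X @ w_star, 0.0)

    # Random features: first-layer weights W are drawn once and frozen.
    W = rng.standard_normal((n_features, d)) / np.sqrt(d)
    Phi = np.maximum(X @ W.T, 0.0)  # fixed random ReLU features

    # Only the linear output layer is trained (ridge regression).
    lam = 1e-3
    a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y)

    train_mse = np.mean((Phi @ a - y) ** 2)
    print(f"train MSE with {n_features} random features: {train_mse:.4f}")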
