Poster
On the Complexity of Learning Neural Networks
Le Song · Santosh Vempala · John Wilmes · Bo Xie

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #206

The stunning empirical successes of neural networks currently lack rigorous theoretical explanation. What form would such an explanation take, in the face of existing complexity-theoretic lower bounds? A first step might be to show that data generated by neural networks with a single hidden layer, smooth activation functions, and benign input distributions can be learned efficiently. We demonstrate here a comprehensive lower bound ruling out this possibility: for a wide class of activation functions (including all those currently in use), and inputs drawn from any logconcave distribution, there is a family of one-hidden-layer functions, whose output is a sum gate, that is hard to learn in a precise sense: any statistical query algorithm (a class that includes all known variants of stochastic gradient descent with any loss function) needs an exponential number of queries, even using tolerance inversely proportional to the input dimensionality. Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer. The lower bound is also robust to small perturbations of the true weights. Systematic experiments illustrate a phase transition in the training error as predicted by the analysis.
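The function class in the abstract is a one-hidden-layer network whose output is a sum gate, i.e. f(x) = Σ_j σ(⟨w_j, x⟩) with the number of hidden units sublinear in the input dimension. A minimal sketch of evaluating such a function is below; tanh as the activation, Gaussian inputs (one logconcave distribution), and random weights are illustrative choices only — the paper's hard family uses a specific weight construction, not random weights.

```python
import numpy as np

def one_hidden_layer_sum(x, W):
    """f(x) = sum_j sigma(<w_j, x>): one hidden layer, sum output gate.

    W has shape (m, d): m hidden units, input dimension d.
    tanh stands in for any smooth activation from the class considered.
    """
    return np.tanh(W @ x).sum()

rng = np.random.default_rng(0)
d = 100   # input dimension
m = 10    # number of hidden units, sublinear in d (m << d)
W = rng.standard_normal((m, d)) / np.sqrt(d)  # illustrative weights only
x = rng.standard_normal(d)  # Gaussian input: an example logconcave distribution
y = one_hidden_layer_sum(x, W)
```

Since tanh is bounded by 1 in absolute value, the output of the sum gate lies in [-m, m] regardless of the input.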

Author Information

Le Song (Ant Financial & Georgia Institute of Technology)
Santosh Vempala (Georgia Tech)
John Wilmes (Georgia Institute of Technology)
Bo Xie (Georgia Tech)
