We consider the problem of learning function classes computed by neural networks with various activations (e.g. ReLU or Sigmoid), a task believed to be computationally intractable in the worst case. A major open problem is to understand the minimal assumptions under which these classes admit provably efficient algorithms. In this work we show that a natural distributional assumption corresponding to {\em eigenvalue decay} of the Gram matrix yields polynomial-time algorithms in the non-realizable setting for expressive classes of networks (e.g. feed-forward networks of ReLUs). We make no assumptions on the structure of the network or the labels. Given sufficiently strong eigenvalue decay, we obtain {\em fully}-polynomial time algorithms in {\em all} the relevant parameters with respect to square-loss. This is the first purely distributional assumption that leads to polynomial-time algorithms for networks of ReLUs. Further, unlike prior distributional assumptions (e.g., that the marginal distribution is Gaussian), eigenvalue decay has been observed in practice on common data sets.
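The abstract's key assumption, eigenvalue decay of the Gram matrix, is easy to inspect empirically. The sketch below (illustrative only, not code from the paper; the data and kernel choice are assumptions) forms a linear-kernel Gram matrix on synthetic data and examines its spectrum:

```python
import numpy as np

# Illustrative sketch: empirically inspect eigenvalue decay of a Gram
# (kernel) matrix. Synthetic Gaussian data; a linear kernel is assumed.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # n=200 samples, d=10 features

# Gram matrix K[i, j] = <x_i, x_j>, symmetric and positive semidefinite.
K = X @ X.T

# Eigenvalues in decreasing order (eigvalsh returns ascending order).
eigvals = np.linalg.eigvalsh(K)[::-1]

# With n >> d, a linear-kernel Gram matrix has rank at most d, so the
# spectrum decays sharply: only the top d eigenvalues are numerically
# nonzero. Count the eigenvalues above a relative threshold.
effective_rank = int(np.sum(eigvals > 1e-8 * eigvals[0]))
print(effective_rank)  # at most d = 10
```

For richer kernels (e.g. RBF) the spectrum typically decays smoothly rather than cutting off at a fixed rank; the decay *rate* is what governs the runtime in results of this kind.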
Author Information
Surbhi Goel (University of Texas at Austin)
Adam Klivans (UT Austin)
More from the Same Authors
- 2022: HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing »
  Tianlong Chen · Chengyue Gong · Daniel Diaz · Xuxi Chen · Jordan Wells · Qiang Liu · Zhangyang Wang · Andrew Ellington · Alex Dimakis · Adam Klivans
- 2022 Poster: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit »
  Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang
- 2022 Poster: Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms »
  Surbhi Goel · Sham Kakade · Adam Kalai · Cyril Zhang
- 2022 Poster: Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks »
  Sitan Chen · Aravind Gollakota · Adam Klivans · Raghu Meka
- 2021 Poster: Efficiently Learning One Hidden Layer ReLU Networks From Queries »
  Sitan Chen · Adam Klivans · Raghu Meka
- 2021 Poster: Gone Fishing: Neural Active Learning with Fisher Embeddings »
  Jordan Ash · Surbhi Goel · Akshay Krishnamurthy · Sham Kakade
- 2020 Poster: From Boltzmann Machines to Neural Networks and Back Again »
  Surbhi Goel · Adam Klivans · Frederic Koehler
- 2020 Poster: Statistical-Query Lower Bounds via Functional Gradients »
  Surbhi Goel · Aravind Gollakota · Adam Klivans
- 2019 Poster: Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals »
  Surbhi Goel · Sushrut Karmalkar · Adam Klivans
- 2019 Spotlight: Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals »
  Surbhi Goel · Sushrut Karmalkar · Adam Klivans
- 2019 Poster: List-decodable Linear Regression »
  Sushrut Karmalkar · Adam Klivans · Pravesh Kothari
- 2019 Spotlight: List-decodable Linear Regression »
  Sushrut Karmalkar · Adam Klivans · Pravesh Kothari
- 2014 Poster: Sparse Polynomial Learning and Graph Sketching »
  Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans
- 2014 Oral: Sparse Polynomial Learning and Graph Sketching »
  Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans