Poster
Statistical-Query Lower Bounds via Functional Gradients
Surbhi Goel · Aravind Gollakota · Adam Klivans
We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign). For the specific problem of ReLU regression (equivalently, agnostically learning a ReLU), we show that any statistical-query algorithm with tolerance $n^{-(1/\epsilon)^b}$ must use at least $2^{n^c \epsilon}$ queries for some constants $b, c > 0$, where $n$ is the dimension and $\epsilon$ is the accuracy parameter. Our results rule out general (as opposed to correlational) SQ learning algorithms, which is unusual for real-valued learning problems. Our techniques involve a gradient boosting procedure for "amplifying" recent lower bounds due to Diakonikolas et al. (COLT 2020) and Goel et al. (ICML 2020) on the SQ dimension of functions computed by two-layer neural networks. The crucial new ingredient is the use of a nonstandard convex functional during the boosting procedure. This also yields a best-possible reduction between two commonly studied models of learning: agnostic learning and probabilistic concepts.
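For context, the main lower bound can be restated schematically in the statistical-query (SQ) model, where an oracle with tolerance $\tau$ answers a bounded query $\phi$ with any value within $\tau$ of $\mathbb{E}_{(x,y)}[\phi(x,y)]$. The snippet below is an informal paraphrase of the abstract's statement only (the constants $b, c$ are existential, not explicit), not an additional result from the paper.

```latex
\documentclass{article}
\usepackage{amsmath, amsthm, amssymb}
\newtheorem{theorem}{Theorem}

\begin{document}

% An SQ oracle with tolerance \tau, given a bounded query \phi, may return
% any value within \tau of E_{(x,y)}[\phi(x,y)]; the bound below trades off
% the tolerance of the queries against how many of them are needed.
\begin{theorem}[ReLU regression; informal restatement of the abstract]
There exist constants $b, c > 0$ such that any statistical-query algorithm
that agnostically learns a ReLU under Gaussian marginals on $\mathbb{R}^n$
to accuracy $\epsilon$, using queries answered with tolerance
$n^{-(1/\epsilon)^b}$, must make at least $2^{n^c \epsilon}$ queries.
\end{theorem}

\end{document}
```

As noted in the abstract, this bound holds for general SQ queries, not only correlational ones.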
Author Information
Surbhi Goel (Microsoft Research NYC)
Aravind Gollakota (University of Texas at Austin)
Adam Klivans (UT Austin)
More from the Same Authors
- 2022 : HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing
  Tianlong Chen · Chengyue Gong · Daniel Diaz · Xuxi Chen · Jordan Wells · Qiang Liu · Zhangyang Wang · Andrew Ellington · Alex Dimakis · Adam Klivans
- 2022 Panel: Panel 1A-4: Hardness of Noise-Free… & Adversarially Robust Learning:…
  Aravind Gollakota · Omar Montasser
- 2022 Poster: Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
  Boaz Barak · Benjamin Edelman · Surbhi Goel · Sham Kakade · Eran Malach · Cyril Zhang
- 2022 Poster: Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms
  Surbhi Goel · Sham Kakade · Adam Kalai · Cyril Zhang
- 2022 Poster: Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
  Sitan Chen · Aravind Gollakota · Adam Klivans · Raghu Meka
- 2021 Poster: Efficiently Learning One Hidden Layer ReLU Networks From Queries
  Sitan Chen · Adam Klivans · Raghu Meka
- 2021 Poster: Gone Fishing: Neural Active Learning with Fisher Embeddings
  Jordan Ash · Surbhi Goel · Akshay Krishnamurthy · Sham Kakade
- 2020 Poster: From Boltzmann Machines to Neural Networks and Back Again
  Surbhi Goel · Adam Klivans · Frederic Koehler
- 2019 Poster: Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals
  Surbhi Goel · Sushrut Karmalkar · Adam Klivans
- 2019 Spotlight: Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals
  Surbhi Goel · Sushrut Karmalkar · Adam Klivans
- 2019 Poster: List-decodable Linear Regression
  Sushrut Karmalkar · Adam Klivans · Pravesh Kothari
- 2019 Spotlight: List-decodable Linear Regression
  Sushrut Karmalkar · Adam Klivans · Pravesh Kothari
- 2017 Poster: Eigenvalue Decay Implies Polynomial-Time Learnability for Neural Networks
  Surbhi Goel · Adam Klivans
- 2014 Poster: Sparse Polynomial Learning and Graph Sketching
  Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans
- 2014 Oral: Sparse Polynomial Learning and Graph Sketching
  Murat Kocaoglu · Karthikeyan Shanmugam · Alex Dimakis · Adam Klivans