Spotlight
More data speeds up training time in learning halfspaces over sparse vectors
Amit Daniely · Nati Linial · Shai Shalev-Shwartz
Sat Dec 07 03:38 PM – 03:42 PM (PST) @ Harvey's Convention Center Floor, CC
The increased availability of data in recent years led several authors to ask whether it is possible to use data as a {\em computational} resource. That is, if more data is available, beyond the sample complexity limit, is it possible to use the extra examples to speed up the computation time required to perform the learning task? We give the first positive answer to this question for a {\em natural supervised learning problem}: we consider agnostic PAC learning of halfspaces over $3$-sparse vectors in $\{-1,1,0\}^n$. This class is inefficiently learnable using $O\left(n/\epsilon^2\right)$ examples. Our main contribution is a novel, non-cryptographic, methodology for establishing computational-statistical gaps, which allows us to show that, under a widely believed assumption that refuting random $\mathrm{3CNF}$ formulas is hard, efficiently learning this class using $O\left(n/\epsilon^2\right)$ examples is impossible. We further show that under stronger hardness assumptions, even $O\left(n^{1.499}/\epsilon^2\right)$ examples do not suffice. On the other hand, we show a new algorithm that learns this class efficiently using $\tilde{\Omega}\left(n^2/\epsilon^2\right)$ examples. This formally establishes the tradeoff between sample and computational complexity for a natural supervised learning problem.
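To make the learning problem concrete, here is a minimal Python sketch of the setting only, not the paper's algorithm: it samples $3$-sparse vectors in $\{-1,1,0\}^n$, labels them with a ground-truth halfspace, and fits a plain perceptron as a hypothetical stand-in learner in the realizable (noise-free) case. Since every example has exactly three nonzero $\pm 1$ entries and the target weights are $\pm 1$, the inner product is always odd, so the data is separable with margin at least 1 and the perceptron converges on the training set.

```python
import random

def sample_3sparse(n, rng):
    """Draw a 3-sparse vector in {-1, 0, 1}^n: three random
    coordinates set to +/-1, the rest zero."""
    x = [0] * n
    for i in rng.sample(range(n), 3):
        x[i] = rng.choice([-1, 1])
    return x

def halfspace_label(w, x):
    """Label of x under the halfspace with normal w: sign of <w, x>."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def perceptron(examples, n, epochs=50):
    """Plain perceptron: a stand-in learner for illustration only
    (the paper's actual algorithm is different)."""
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in examples:
            if halfspace_label(w, x) != y:  # mistake-driven update
                for i in range(n):
                    w[i] += y * x[i]
    return w

rng = random.Random(0)
n = 20
w_star = [rng.choice([-1, 1]) for _ in range(n)]  # ground-truth halfspace
data = [(x, halfspace_label(w_star, x))
        for x in (sample_3sparse(n, rng) for _ in range(500))]
w_hat = perceptron(data, n)
train_err = sum(halfspace_label(w_hat, x) != y for x, y in data) / len(data)
```

Note that the paper's hardness results concern the agnostic setting, where labels need not come from any halfspace; this realizable sketch only illustrates the instance space and hypothesis class.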
Author Information
Amit Daniely (Hebrew University and Google Research)
Nati Linial (The Hebrew University)
Shai Shalev-Shwartz (Mobileye & HUJI)
Related Events (a corresponding poster, oral, or spotlight)

2013 Poster: More data speeds up training time in learning halfspaces over sparse vectors »
Sun. Dec 8th 03:00 – 07:59 AM Room Harrah's Special Events Center, 2nd Floor
More from the Same Authors

2022 Poster: Knowledge Distillation: Bad Models Can Be Good Role Models »
Gal Kaplun · Eran Malach · Preetum Nakkiran · Shai Shalev-Shwartz 
2021 : Q&A with Shai Shalev-Shwartz »
Shai Shalev-Shwartz 
2021 : Deep Learning: Success, Failure, and the Border between them, Shai Shalev-Shwartz »
Shai Shalev-Shwartz 
2020 Poster: Neural Networks Learning and Memorization with (almost) no Over-Parameterization »
Amit Daniely 
2020 Poster: The Implications of Local Correlation on Learning Some Deep Functions »
Eran Malach · Shai Shalev-Shwartz 
2020 Poster: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
Amit Daniely · Hadas Shacham 
2020 Spotlight: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
Amit Daniely · Hadas Shacham 
2020 Poster: Learning Parities with Neural Networks »
Amit Daniely · Eran Malach 
2020 Poster: Hardness of Learning Neural Networks with Natural Weights »
Amit Daniely · Gal Vardi 
2020 Oral: Learning Parities with Neural Networks »
Amit Daniely · Eran Malach 
2019 Poster: Locally Private Learning without Interaction Requires Separation »
Amit Daniely · Vitaly Feldman 
2019 Poster: Generalization Bounds for Neural Networks via Approximate Description Length »
Amit Daniely · Elad Granot 
2019 Spotlight: Generalization Bounds for Neural Networks via Approximate Description Length »
Amit Daniely · Elad Granot 
2019 Poster: Is Deeper Better only when Shallow is Good? »
Eran Malach · Shai Shalev-Shwartz 
2017 Poster: Decoupling "when to update" from "how to update" »
Eran Malach · Shai Shalev-Shwartz 
2017 Poster: SGD Learns the Conjugate Kernel Class of the Network »
Amit Daniely 
2016 Poster: Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity »
Amit Daniely · Roy Frostig · Yoram Singer 
2016 Poster: Learning a Metric Embedding for Face Recognition using the Multibatch Method »
Oren Tadmor · Tal Rosenwein · Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua 
2015 Poster: Beyond Convexity: Stochastic Quasi-Convex Optimization »
Elad Hazan · Kfir Y. Levy · Shai Shalev-Shwartz 
2014 Poster: On the Computational Efficiency of Training Neural Networks »
Roi Livni · Shai Shalev-Shwartz · Ohad Shamir 
2013 Poster: Accelerated Mini-Batch Stochastic Dual Coordinate Ascent »
Shai Shalev-Shwartz · Tong Zhang 
2012 Poster: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz 
2012 Spotlight: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz 
2012 Poster: Learning Halfspaces with the Zero-One Loss: Time-Accuracy Tradeoffs »
Aharon Birnbaum · Shai Shalev-Shwartz 
2011 Poster: ShareBoost: Efficient multiclass learning with feature sharing »
Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua 
2011 Session: Spotlight Session 4 »
Shai Shalev-Shwartz 
2011 Session: Oral Session 4 »
Shai Shalev-Shwartz 
2008 Poster: Fast Rates for Regularized Objectives »
Karthik Sridharan · Shai Shalev-Shwartz · Nati Srebro 
2008 Poster: Mind the Duality Gap: Logarithmic regret algorithms for online optimization »
Shai Shalev-Shwartz · Sham M Kakade 
2008 Spotlight: Mind the Duality Gap: Logarithmic regret algorithms for online optimization »
Shai Shalev-Shwartz · Sham M Kakade 
2006 Poster: Online Classification for Complex Problems Using Simultaneous Projections »
Yonatan Amit · Shai Shalev-Shwartz · Yoram Singer 
2006 Poster: Convex Repeated Games and Fenchel Duality »
Shai Shalev-Shwartz · Yoram Singer 
2006 Spotlight: Convex Repeated Games and Fenchel Duality »
Shai Shalev-Shwartz · Yoram Singer