Poster
On the Computational Efficiency of Training Neural Networks
Roi Livni · Shai Shalev-Shwartz · Ohad Shamir
It is well known that neural networks are computationally hard to train. On the other hand, in practice, modern-day neural networks are trained efficiently using SGD and a variety of tricks, including different activation functions (e.g., ReLU), over-specification (i.e., training networks that are larger than needed), and regularization. In this paper we revisit the computational complexity of training neural networks from a modern perspective. We provide both positive and negative results, some of which yield new, provably efficient and practical algorithms for training neural networks.
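As a concrete illustration of the practices the abstract alludes to, the following is a minimal sketch (not taken from the paper) of training an over-specified one-hidden-layer ReLU network with mini-batch SGD and weight-decay regularization. The architecture, data, and hyperparameters are illustrative assumptions, written here in PyTorch.

```python
# Minimal sketch (illustrative, not the paper's algorithm): train an
# over-specified one-hidden-layer ReLU network with SGD + weight decay.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data: targets come from a small "true" ReLU network,
# but we train a much wider one (over-specification).
d, n, true_width, train_width = 20, 1000, 5, 200
X = torch.randn(n, d)
true_net = nn.Sequential(nn.Linear(d, true_width), nn.ReLU(), nn.Linear(true_width, 1))
with torch.no_grad():
    y = true_net(X)

model = nn.Sequential(nn.Linear(d, train_width), nn.ReLU(), nn.Linear(train_width, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-4)  # regularization
loss_fn = nn.MSELoss()

for epoch in range(200):
    perm = torch.randperm(n)
    for i in range(0, n, 32):          # mini-batch SGD
        idx = perm[i:i + 32]
        opt.zero_grad()
        loss = loss_fn(model(X[idx]), y[idx])
        loss.backward()
        opt.step()

print(f"final training loss: {loss_fn(model(X), y).item():.4f}")
```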
Author Information
Roi Livni (Hebrew University)
Shai Shalev-Shwartz (Mobileye & HUJI)
Ohad Shamir (Weizmann Institute of Science)
More from the Same Authors
- 2021 Spotlight: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
  Itay Safran · Ohad Shamir
- 2022: On the Complexity of Finding Small Subgradients in Nonsmooth Optimization »
  Guy Kornowski · Ohad Shamir
- 2022 Poster: On Margin Maximization in Linear and ReLU Networks »
  Gal Vardi · Ohad Shamir · Nati Srebro
- 2022 Poster: The Sample Complexity of One-Hidden-Layer Neural Networks »
  Gal Vardi · Ohad Shamir · Nati Srebro
- 2022 Poster: Reconstructing Training Data From Trained Neural Networks »
  Niv Haim · Gal Vardi · Gilad Yehudai · Ohad Shamir · Michal Irani
- 2022 Poster: Knowledge Distillation: Bad Models Can Be Good Role Models »
  Gal Kaplun · Eran Malach · Preetum Nakkiran · Shai Shalev-Shwartz
- 2022 Poster: Gradient Methods Provably Converge to Non-Robust Networks »
  Gal Vardi · Gilad Yehudai · Ohad Shamir
- 2021: Q&A with Shai Shalev-Shwartz »
  Shai Shalev-Shwartz
- 2021: Deep Learning: Success, Failure, and the Border between them, Shai Shalev-Shwartz »
  Shai Shalev-Shwartz
- 2021 Poster: Learning a Single Neuron with Bias Using Gradient Descent »
  Gal Vardi · Gilad Yehudai · Ohad Shamir
- 2021 Poster: Oracle Complexity in Nonsmooth Nonconvex Optimization »
  Guy Kornowski · Ohad Shamir
- 2021 Poster: A Stochastic Newton Algorithm for Distributed Convex Optimization »
  Brian Bullins · Kshitij Patel · Ohad Shamir · Nathan Srebro · Blake Woodworth
- 2021 Oral: Oracle Complexity in Nonsmooth Nonconvex Optimization »
  Guy Kornowski · Ohad Shamir
- 2021 Poster: Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems »
  Itay Safran · Ohad Shamir
- 2020: Poster Session 1 (gather.town) »
  Laurent Condat · Tiffany Vlaar · Ohad Shamir · Mohammadi Zaki · Zhize Li · Guan-Horng Liu · Samuel Horváth · Mher Safaryan · Yoni Choukroun · Kumar Shridhar · Nabil Kahale · Jikai Jin · Pratik Kumar Jawanpuria · Gaurav Kumar Yadav · Kazuki Koyama · Junyoung Kim · Xiao Li · Saugata Purkayastha · Adil Salim · Dighanchal Banerjee · Peter Richtarik · Lakshman Mahto · Tian Ye · Bamdev Mishra · Huikang Liu · Jiajie Zhu
- 2020: Contributed talks in Session 1 (Zoom) »
  Sebastian Stich · Laurent Condat · Zhize Li · Ohad Shamir · Tiffany Vlaar · Mohammadi Zaki
- 2020: Contributed Video: Can We Find Near-Approximately-Stationary Points of Nonsmooth Nonconvex Functions?, Ohad Shamir »
  Ohad Shamir
- 2020 Poster: The Implications of Local Correlation on Learning Some Deep Functions »
  Eran Malach · Shai Shalev-Shwartz
- 2020 Poster: Neural Networks with Small Weights and Depth-Separation Barriers »
  Gal Vardi · Ohad Shamir
- 2019 Poster: Is Deeper Better only when Shallow is Good? »
  Eran Malach · Shai Shalev-Shwartz
- 2019 Poster: On the Power and Limitations of Random Features for Understanding Neural Networks »
  Gilad Yehudai · Ohad Shamir
- 2018 Poster: Are ResNets Provably Better than Linear Predictors? »
  Ohad Shamir
- 2018 Poster: Global Non-convex Optimization with Discretized Diffusions »
  Murat Erdogdu · Lester Mackey · Ohad Shamir
- 2017 Poster: Decoupling "when to update" from "how to update" »
  Eran Malach · Shai Shalev-Shwartz
- 2016 Poster: Online Pricing with Strategic and Patient Buyers »
  Michal Feldman · Tomer Koren · Roi Livni · Yishay Mansour · Aviv Zohar
- 2016 Poster: Dimension-Free Iteration Complexity of Finite Sum Optimization Problems »
  Yossi Arjevani · Ohad Shamir
- 2016 Poster: Without-Replacement Sampling for Stochastic Gradient Methods »
  Ohad Shamir
- 2016 Oral: Without-Replacement Sampling for Stochastic Gradient Methods »
  Ohad Shamir
- 2016 Poster: Learning a Metric Embedding for Face Recognition using the Multibatch Method »
  Oren Tadmor · Tal Rosenwein · Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua
- 2015 Poster: Beyond Convexity: Stochastic Quasi-Convex Optimization »
  Elad Hazan · Kfir Y. Levy · Shai Shalev-Shwartz
- 2015 Poster: Communication Complexity of Distributed Convex Learning and Optimization »
  Yossi Arjevani · Ohad Shamir
- 2014 Poster: Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation »
  Ohad Shamir
- 2013 Poster: Online Learning with Switching Costs and Other Adaptive Adversaries »
  Nicolò Cesa-Bianchi · Ofer Dekel · Ohad Shamir
- 2013 Poster: More data speeds up training time in learning halfspaces over sparse vectors »
  Amit Daniely · Nati Linial · Shai Shalev-Shwartz
- 2013 Spotlight: More data speeds up training time in learning halfspaces over sparse vectors »
  Amit Daniely · Nati Linial · Shai Shalev-Shwartz
- 2013 Poster: Accelerated Mini-Batch Stochastic Dual Coordinate Ascent »
  Shai Shalev-Shwartz · Tong Zhang
- 2012 Poster: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
  Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz
- 2012 Spotlight: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
  Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz
- 2012 Poster: Learning Halfspaces with the Zero-One Loss: Time-Accuracy Tradeoffs »
  Aharon Birnbaum · Shai Shalev-Shwartz
- 2011 Poster: ShareBoost: Efficient multiclass learning with feature sharing »
  Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua
- 2011 Session: Spotlight Session 4 »
  Shai Shalev-Shwartz
- 2011 Session: Oral Session 4 »
  Shai Shalev-Shwartz
- 2008 Poster: Fast Rates for Regularized Objectives »
  Karthik Sridharan · Shai Shalev-Shwartz · Nati Srebro
- 2008 Poster: Mind the Duality Gap: Logarithmic regret algorithms for online optimization »
  Shai Shalev-Shwartz · Sham M Kakade
- 2008 Spotlight: Mind the Duality Gap: Logarithmic regret algorithms for online optimization »
  Shai Shalev-Shwartz · Sham M Kakade
- 2006 Poster: Online Classification for Complex Problems Using Simultaneous Projections »
  Yonatan Amit · Shai Shalev-Shwartz · Yoram Singer
- 2006 Poster: Convex Repeated Games and Fenchel Duality »
  Shai Shalev-Shwartz · Yoram Singer
- 2006 Spotlight: Convex Repeated Games and Fenchel Duality »
  Shai Shalev-Shwartz · Yoram Singer