Understanding generalization in deep learning has been one of the major challenges in statistical learning theory over the last decade. While recent work has illustrated that the dataset and the training algorithm must be taken into account in order to obtain meaningful generalization bounds, it is still theoretically unclear which properties of the data and the algorithm determine the generalization performance. In this study, we approach this problem from a dynamical systems theory perspective and represent stochastic optimization algorithms as \emph{random iterated function systems} (IFS). Well studied in the dynamical systems literature, such IFSs can, under mild assumptions, be shown to be ergodic with an invariant measure that is often supported on sets with a \emph{fractal structure}. As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the `complexity' of the fractal structure underlying its invariant measure. Then, by leveraging results from dynamical systems theory, we show that the generalization error can be explicitly linked to the choice of the algorithm (e.g., stochastic gradient descent -- SGD), algorithm hyperparameters (e.g., step-size, batch-size), and the geometry of the problem (e.g., the Hessian of the loss). We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms (e.g., SGD and preconditioned variants), and obtain analytical estimates of our bound. For modern neural networks, we develop an efficient algorithm to compute this bound and support our theory with experiments on a variety of neural networks.
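To make the IFS viewpoint concrete: for a least-squares loss, each mini-batch B induces an affine map h_B(w) = w - η∇f_B(w) = (I - ηA_B)w + ηb_B, and a run of constant step-size SGD is a random composition of such maps. The sketch below is our own minimal illustration of that correspondence, not the bound-computation algorithm from the paper; the problem sizes, step-size, and the finite family of 50 maps are arbitrary assumptions made for the toy example.

```python
import numpy as np

# Toy setup: least-squares loss f(w) = ||X w - y||^2 / (2n).
rng = np.random.default_rng(0)
n, d, batch_size, eta = 200, 5, 20, 0.05
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def sgd_map(idx):
    """Return the affine IFS map h_B(w) = (I - eta*A_B) w + eta*b_B for batch B,
    together with its Lipschitz constant ||I - eta*A_B||_2."""
    Xb, yb = X[idx], y[idx]
    A = Xb.T @ Xb / len(idx)   # mini-batch Hessian (constant for least squares)
    b = Xb.T @ yb / len(idx)
    M = np.eye(d) - eta * A    # linear part of the map
    return (lambda w, M=M, b=b: M @ w + eta * b), np.linalg.norm(M, 2)

# A finite family of maps; one SGD trajectory = a random composition of them.
maps = [sgd_map(rng.choice(n, batch_size, replace=False)) for _ in range(50)]
w = np.zeros(d)
for _ in range(1000):
    h, _ = maps[rng.integers(len(maps))]
    w = h(w)

lips = np.array([L for _, L in maps])
print("iterate after 1000 random compositions:", np.round(w, 3))
print(f"Lipschitz constants of the maps: min {lips.min():.3f}, max {lips.max():.3f}")
# If every map is a contraction (max < 1), classical IFS theory gives ergodicity
# and an invariant measure; the contraction factors control how 'complex'
# (fractal) the support of that measure can be.
```

On this toy problem the printed Lipschitz constants are below 1, so the IFS is contractive; increasing η (or shrinking the batch) pushes them toward 1 and beyond, which is one way to see how the step-size and batch-size enter the fractal complexity that the bound depends on.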
Author Information
Alexander Camuto (University of Oxford & The Alan Turing Institute)
George Deligiannidis (Oxford)
Murat Erdogdu (University of Toronto)
Mert Gurbuzbalaban (Rutgers University)
Umut Simsekli (Inria Paris / ENS)
Lingjiong Zhu (Florida State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms »
  Tue. Dec 7th 04:30 -- 06:00 PM
More from the Same Authors
- 2021: Certifiably Robust Variational Autoencoders »
  Ben Barrett · Alexander Camuto · Matthew Willetts · Thomas Rainforth
- 2022: Neural Networks Efficiently Learn Low-Dimensional Representations with SGD »
  Alireza Mousavi-Hosseini · Sejun Park · Manuela Girotti · Ioannis Mitliagkas · Murat Erdogdu
- 2022 Poster: High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation »
  Jimmy Ba · Murat Erdogdu · Taiji Suzuki · Zhichao Wang · Denny Wu · Greg Yang
- 2022 Poster: A Continuous Time Framework for Discrete Denoising Models »
  Andrew Campbell · Joe Benton · Valentin De Bortoli · Thomas Rainforth · George Deligiannidis · Arnaud Doucet
- 2022 Poster: A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs »
  Fabian Falck · Christopher Williams · Dominic Danks · George Deligiannidis · Christopher Yau · Chris C Holmes · Arnaud Doucet · Matthew Willetts
- 2022 Poster: Generalization Bounds for Stochastic Gradient Descent via Localized $\varepsilon$-Covers »
  Sejun Park · Umut Simsekli · Murat Erdogdu
- 2021 Poster: Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks »
  Melih Barsbey · Milad Sefidgaran · Murat Erdogdu · Gaël Richard · Umut Simsekli
- 2021 Poster: Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks »
  Tolga Birdal · Aaron Lou · Leonidas Guibas · Umut Simsekli
- 2021 Poster: An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias »
  Lu Yu · Krishnakumar Balasubramanian · Stanislav Volgushev · Murat Erdogdu
- 2021 Poster: Manipulating SGD with Data Ordering Attacks »
  I Shumailov · Zakhar Shumaylov · Dmitry Kazhdan · Yiren Zhao · Nicolas Papernot · Murat Erdogdu · Ross J Anderson
- 2021 Poster: On Empirical Risk Minimization with Dependent and Heavy-Tailed Data »
  Abhishek Roy · Krishnakumar Balasubramanian · Murat Erdogdu
- 2021 Poster: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance »
  Hongjian Wang · Mert Gurbuzbalaban · Lingjiong Zhu · Umut Simsekli · Murat Erdogdu
- 2021 Poster: Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections »
  Kimia Nadjahi · Alain Durmus · Pierre E Jacob · Roland Badeau · Umut Simsekli
- 2020 Poster: On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method »
  Ye He · Krishnakumar Balasubramanian · Murat Erdogdu
- 2020 Poster: Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization »
  Xuefeng GAO · Mert Gurbuzbalaban · Lingjiong Zhu
- 2020 Poster: Explicit Regularisation in Gaussian Noise Injections »
  Alexander Camuto · Matthew Willetts · Umut Simsekli · Stephen J Roberts · Chris C Holmes
- 2020 Poster: Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks »
  Umut Simsekli · Ozan Sener · George Deligiannidis · Murat Erdogdu
- 2020 Spotlight: Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks »
  Umut Simsekli · Ozan Sener · George Deligiannidis · Murat Erdogdu
- 2019 Poster: Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond »
  Xuechen (Chen) Li · Denny Wu · Lester Mackey · Murat Erdogdu
- 2019 Spotlight: Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond »
  Xuechen (Chen) Li · Denny Wu · Lester Mackey · Murat Erdogdu
- 2018 Poster: Global Non-convex Optimization with Discretized Diffusions »
  Murat Erdogdu · Lester Mackey · Ohad Shamir
- 2017 Poster: Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding »
  Mainak Jas · Tom Dupré la Tour · Umut Simsekli · Alexandre Gramfort
- 2017 Poster: Robust Estimation of Neural Signals in Calcium Imaging »
  Hakan Inan · Murat Erdogdu · Mark Schnitzer
- 2017 Poster: When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent »
  Mert Gurbuzbalaban · Asuman Ozdaglar · Pablo A Parrilo · Nuri Vanli
- 2017 Spotlight: When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent »
  Mert Gurbuzbalaban · Asuman Ozdaglar · Pablo A Parrilo · Nuri Vanli
- 2017 Poster: Inference in Graphical Models via Semidefinite Programming Hierarchies »
  Murat Erdogdu · Yash Deshpande · Andrea Montanari
- 2016 Poster: Scaled Least Squares Estimator for GLMs in Large-Scale Problems »
  Murat Erdogdu · Lee H Dicker · Mohsen Bayati
- 2015 Poster: Convergence rates of sub-sampled Newton methods »
  Murat Erdogdu · Andrea Montanari
- 2015 Poster: Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma »
  Murat Erdogdu
- 2015 Spotlight: Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma »
  Murat Erdogdu
- 2013 Poster: Estimating LASSO Risk and Noise Level »
  Mohsen Bayati · Murat Erdogdu · Andrea Montanari