We introduce a new theoretical framework that analyzes deep learning optimization in connection with its generalization error. Existing frameworks for analyzing neural network optimization, such as mean-field theory and neural tangent kernel theory, typically require taking the infinite-width limit of the network to establish global convergence. This makes it difficult to deal directly with finite-width networks; in particular, in the neural tangent kernel regime, such analyses cannot reveal favorable properties of neural networks beyond kernel methods. To obtain a more natural analysis, we take a completely different approach: we formulate parameter training as transportation map estimation and show its global convergence via the theory of infinite dimensional Langevin dynamics. This enables us to analyze narrow and wide networks in a unified manner. Moreover, we give generalization gap and excess risk bounds for the solution obtained by the dynamics. The excess risk bound achieves the so-called fast learning rate. In particular, we show exponential convergence for a classification problem and a minimax optimal rate for a regression problem.
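For intuition only, the dynamics invoked above can be written schematically as a Langevin-type stochastic differential equation on a Hilbert space; the notation below (empirical risk $L$, regularization operator $A$, inverse temperature $\beta$, cylindrical Brownian motion $B_t$) is an illustrative assumption rather than the paper's exact formulation:
\[
\mathrm{d}W_t = -\bigl(\nabla L(W_t) + \lambda A\, W_t\bigr)\,\mathrm{d}t + \sqrt{2/\beta}\;\mathrm{d}B_t, \qquad W_t \in \mathcal{H},
\]
where, roughly speaking, $W_t$ plays the role of a transportation map carrying an initial parameter distribution to the trained one, so the same equation covers narrow and wide networks alike. Under suitable regularity conditions, Langevin dynamics of this form admit a Gibbs-type invariant measure, which is what makes a global convergence statement possible without an infinite-width limit.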
Author Information
Taiji Suzuki (The University of Tokyo/RIKEN-AIP)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
  Fri Dec 11th, 05:00 -- 07:00 AM, Poster Session 6
More from the Same Authors
- 2020 Poster: Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks
  Kenta Oono · Taiji Suzuki
- 2018 Poster: Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation
  Tomoya Murata · Taiji Suzuki
- 2017 Poster: Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
  Tomoya Murata · Taiji Suzuki
- 2017 Poster: Trimmed Density Ratio Estimation
  Song Liu · Akiko Takeda · Taiji Suzuki · Kenji Fukumizu
- 2016 Poster: Minimax Optimal Alternating Minimization for Kernel Nonparametric Tensor Learning
  Taiji Suzuki · Heishiro Kanagawa · Hayato Kobayashi · Nobuyuki Shimizu · Yukihiro Tagami
- 2013 Poster: Convex Tensor Decomposition via Structured Schatten Norm Regularization
  Ryota Tomioka · Taiji Suzuki
- 2012 Poster: Density-Difference Estimation
  Masashi Sugiyama · Takafumi Kanamori · Taiji Suzuki · Marthinus C du Plessis · Song Liu · Ichiro Takeuchi
- 2011 Poster: Relative Density-Ratio Estimation for Robust Distribution Comparison
  Makoto Yamada · Taiji Suzuki · Takafumi Kanamori · Hirotaka Hachiya · Masashi Sugiyama
- 2011 Poster: Statistical Performance of Convex Tensor Decomposition
  Ryota Tomioka · Taiji Suzuki · Kohei Hayashi · Hisashi Kashima
- 2011 Poster: Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning
  Taiji Suzuki