Abstract:
In this paper, we show that although the minimizers of cross-entropy and related classification losses are off at infinity, network weights learned by gradient flow converge in direction, with an immediate corollary that network predictions, training errors, and the margin distribution also converge. This proof holds for deep homogeneous networks — a broad class of networks allowing for ReLU, max-pooling, linear, and convolutional layers — and we additionally provide empirical support not just close to the theory (e.g., the AlexNet), but also on non-homogeneous networks (e.g., the DenseNet). If the network further has locally Lipschitz gradients, we show that these gradients also converge in direction, and asymptotically align with the gradient flow path, with consequences on margin maximization, convergence of saliency maps, and a few other settings. Our analysis complements and is distinct from the well-known neural tangent and mean-field theories, and in particular makes no requirements on network width and initialization, instead merely requiring perfect classification accuracy. The proof proceeds by developing a theory of unbounded nonsmooth Kurdyka-Łojasiewicz inequalities for functions definable in an o-minimal structure, and is also applicable outside deep learning.
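The abstract's two central claims, directional convergence of the weights and convergence of the margins, can be probed empirically. The following is a minimal sketch (not the authors' code; the data, architecture, and hyperparameters are illustrative assumptions): it trains a bias-free two-layer ReLU network, which is positively homogeneous of degree L = 2, on logistic loss, and tracks the cosine between successive weight directions w(t)/||w(t)|| together with the normalized margin min_i y_i f(x_i; w(t)) / ||w(t)||^L. With a small step size, gradient descent crudely approximates the gradient flow analyzed in the paper; once the data is perfectly classified, the cosine should approach 1 and the margin should stabilize.

```python
# Minimal sketch, not the paper's code: all names and hyperparameters below
# are illustrative assumptions.
import torch

torch.manual_seed(0)

# Toy linearly separable binary data with labels in {-1, +1}, so a small
# network can reach the perfect classification accuracy the theory assumes.
X = torch.randn(64, 10)
y = torch.sign(X[:, 0])

# Bias-free two-layer ReLU network: positively homogeneous of degree L = 2.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32, bias=False),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1, bias=False),
)
L = 2  # homogeneity degree

def flat_params() -> torch.Tensor:
    """All network weights as one detached vector w(t)."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

# Small constant step size, a crude discretization of gradient flow.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
prev_dir = flat_params()
prev_dir = prev_dir / prev_dir.norm()

for step in range(1, 20001):
    opt.zero_grad()
    out = model(X).squeeze(1)
    # Logistic loss: its infimum is 0, approached only as ||w|| -> infinity,
    # i.e., the minimizers are off at infinity.
    loss = torch.nn.functional.softplus(-y * out).mean()
    loss.backward()
    opt.step()

    if step % 2000 == 0:
        w = flat_params()
        cur_dir = w / w.norm()
        # Directional convergence: cosine between successive directions -> 1.
        cosine = torch.dot(cur_dir, prev_dir).item()
        # Normalized margin of an L-homogeneous network; should stabilize.
        margin = (y * out.detach()).min().item() / w.norm().item() ** L
        print(f"step {step:6d}  loss {loss.item():.4f}  "
              f"cos(prev, cur) {cosine:+.6f}  margin {margin:+.6f}")
        prev_dir = cur_dir
```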
Author Information
Ziwei Ji (University of Illinois Urbana-Champaign)
Matus Telgarsky (University of Illinois Urbana-Champaign)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: Directional convergence and alignment in deep learning
  Thu Dec 10th 05:00 -- 07:00 PM, Poster Session 5
More from the Same Authors
- 2018 Poster: Size-Noise Tradeoffs in Generative Networks
  Bolton Bailey · Matus Telgarsky
- 2018 Spotlight: Size-Noise Tradeoffs in Generative Networks
  Bolton Bailey · Matus Telgarsky
- 2017 Poster: Spectrally-normalized margin bounds for neural networks
  Peter Bartlett · Dylan J Foster · Matus Telgarsky
- 2017 Spotlight: Spectrally-normalized margin bounds for neural networks
  Peter Bartlett · Dylan J Foster · Matus Telgarsky
- 2014 Poster: Scalable Non-linear Learning with Adaptive Polynomial Expansions
  Alekh Agarwal · Alina Beygelzimer · Daniel Hsu · John Langford · Matus J Telgarsky
- 2013 Poster: Moment-based Uniform Deviation Bounds for $k$-means and Friends
  Matus J Telgarsky · Sanjoy Dasgupta
- 2011 Poster: The Fast Convergence of Boosting
  Matus J Telgarsky