We study the convergence of GD and SGD when training mildly parameterized neural networks starting from random initialization. For a broad range of models and loss functions, including the widely used square loss and cross-entropy loss, we prove an ``early stage convergence'' result: the loss decreases by a significant amount during the early stage of training, and this decrease happens quickly. Furthermore, for exponential-type loss functions, and under some assumptions on the training data, we show global convergence of GD. Instead of relying on extreme over-parameterization, our study is based on a microscopic analysis of the activation patterns of the neurons, which helps us derive gradient lower bounds. The results on activation patterns, which we call ``neuron partition'', build intuition for understanding the training dynamics of neural networks, and may be of independent interest.
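The following is a minimal illustrative sketch, not the paper's algorithm or analysis: it runs plain gradient descent with square loss on a small two-layer ReLU network and monitors (i) the early-stage drop of the loss and (ii) how the per-neuron activation pattern (which samples activate which neuron, the object underlying the ``neuron partition'' idea) changes during training. All names and hyperparameters (n, d, m, lr, steps, the teacher target) are arbitrary choices for demonstration.

```python
# Hypothetical demo (assumptions: two-layer ReLU net, square loss, fixed outer
# weights a, plain full-batch GD). Not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples in d dimensions, targets from a simple teacher.
n, d, m = 64, 10, 128                      # m hidden neurons (mildly sized)
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = np.tanh(X @ rng.normal(size=d))        # arbitrary smooth teacher targets

# Random initialization of f(x) = a^T relu(W x); a is kept fixed for simplicity.
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(W):
    pre = X @ W.T                          # (n, m) pre-activations
    pattern = (pre > 0).astype(float)      # activation pattern: 1 if neuron fires
    out = np.maximum(pre, 0.0) @ a         # network outputs, shape (n,)
    return pattern, out

lr, steps = 0.5, 200
prev_pattern = None
for t in range(steps):
    pattern, out = forward(W)
    loss = 0.5 * np.mean((out - y) ** 2)

    # Gradient of the square loss w.r.t. W (ReLU derivative = activation pattern).
    err = (out - y) / n                                   # (n,)
    grad_W = (err[:, None] * pattern * a[None, :]).T @ X  # (m, d)
    W -= lr * grad_W

    if t % 20 == 0:
        flips = 0 if prev_pattern is None else int(np.sum(pattern != prev_pattern))
        print(f"step {t:4d}  loss {loss:.4f}  pattern flips since last print: {flips}")
        prev_pattern = pattern.copy()
```

Running this prints the loss at regular intervals together with the number of sign flips in the activation pattern, which is one simple way to visualize the early-stage loss decrease alongside the evolution of the neurons' activation regions.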
Mingze Wang (Peking University)
Chao Ma (Stanford University)