Step Size Matters in Deep Learning
Kamil Nar · Shankar Sastry

Tue Dec 04 01:45 PM -- 01:50 PM (PST) @ Room 220 E

Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. Consequently, behaviors that are typically observed in these systems emerge during training, such as convergence to an orbit rather than to a fixed point, or dependence of convergence on the initialization. The step size of the algorithm plays a critical role in these behaviors: it determines the subset of the local optima that the algorithm can converge to, and it specifies the magnitude of the oscillations if the algorithm converges to an orbit. To elucidate the effects of the step size on the training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm. The results provide an explanation for several phenomena observed in practice, including the deterioration in the training error with increased depth, the hardness of estimating linear mappings with large singular values, and the distinct performance of deep residual networks.
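The step-size effects described in the abstract can be illustrated on the simplest possible case. This is a minimal sketch, not the paper's analysis: for gradient descent on a one-dimensional quadratic f(w) = ½λw², the update map is w ← (1 − ηλ)w, so the fixed point w* = 0 is stable only when |1 − ηλ| < 1, i.e. η < 2/λ. At η = 2/λ the iterates oscillate on an orbit; above it they diverge. The function name `run_gd` and the parameter values are assumptions for illustration.

```python
# Illustrative sketch (not the paper's method): gradient descent on the
# quadratic f(w) = 0.5 * lam * w**2, whose gradient is lam * w.
# The update w <- w - eta * lam * w contracts toward 0 iff eta < 2/lam.

def run_gd(eta, lam=4.0, w0=1.0, steps=50):
    """Iterate w <- w - eta * lam * w and return the final iterate."""
    w = w0
    for _ in range(steps):
        w = w - eta * lam * w
    return w

# With lam = 4, the critical step size is 2/lam = 0.5.
converged = run_gd(eta=0.1)   # eta < 2/lam: contracts to the fixed point 0
orbit = run_gd(eta=0.5)       # eta = 2/lam: oscillates between +1 and -1
diverged = run_gd(eta=0.6)    # eta > 2/lam: the iterates blow up
```

The same stability condition, applied coordinate-wise along the eigendirections of the Hessian, is what makes mappings with large singular values hard to estimate at a fixed step size: the sharpest direction dictates the admissible η.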

Author Information

Kamil Nar (University of California, Berkeley)
Shankar Sastry (Department of EECS, UC Berkeley)
