

Poster

How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks

Mo Zhou · Rong Ge

East Exhibit Hall A-C #2302
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The ability to learn useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper we consider another mechanism for feature learning via gradient descent through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. We further strengthen this local convergence analysis by incorporating early-stage feature learning analysis. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training.
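The setting the abstract describes can be illustrated with a minimal NumPy sketch, not the paper's actual construction: a student two-layer ReLU network is trained by gradient descent on a squared loss with an L2 (weight-decay) regularizer against a single-ReLU teacher, and we track how well the best-aligned student neuron matches the ground-truth direction. All sizes, the teacher, and the hyperparameters (`d`, `m`, `lam`, `lr`, `steps`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 10, 8, 2000            # input dim, student width, samples (illustrative)
w_star = np.zeros(d)
w_star[0] = 1.0                  # ground-truth direction (hypothetical teacher)

X = rng.standard_normal((n, d))
y = np.maximum(X @ w_star, 0.0)  # teacher: a single ReLU neuron

# Student two-layer ReLU network with small random initialization
W = 0.1 * rng.standard_normal((m, d))   # first-layer weights, one row per neuron
a = 0.1 * rng.standard_normal(m)        # second-layer weights

lam, lr, steps = 1e-3, 0.05, 500        # regularization strength, step size, iterations

def best_alignment(W):
    """Largest cosine similarity between a student neuron and w_star."""
    norms = np.linalg.norm(W, axis=1) + 1e-12
    return np.max((W @ w_star) / norms)

def mse(W, a):
    pred = np.maximum(X @ W.T, 0.0) @ a
    return np.mean((pred - y) ** 2)

align0, mse0 = best_alignment(W), mse(W, a)

for _ in range(steps):
    H = np.maximum(X @ W.T, 0.0)            # (n, m) hidden activations
    r = H @ a - y                           # residuals
    # Gradients of (1/2n)||r||^2 + (lam/2)(||a||^2 + ||W||_F^2)
    grad_a = H.T @ r / n + lam * a
    mask = (X @ W.T > 0).astype(float)      # ReLU derivative
    grad_W = ((mask * r[:, None]) * a).T @ X / n + lam * W
    a -= lr * grad_a
    W -= lr * grad_W

align1, mse1 = best_alignment(W), mse(W, a)
```

In this toy run the regularized objective drives some student neuron toward the teacher's direction, so `align1` exceeds `align0` while the training loss decreases; the paper's analysis concerns the corresponding local-convergence regime, where gradient descent on the regularized objective provably recovers the ground-truth directions.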
