

Poster

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs

Etienne Boursier · Loucas Pillaud-Vivien · Nicolas Flammarion

Hall J (level 1) #515

Keywords: [ ReLU networks ] [ non-convex optimisation ] [ Gradient flow ] [ two-layer neural networks ] [ global convergence ] [ implicit bias ] [ variation norm ] [ Gradient Descent ]


Abstract:

The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This article presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden-layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm. Furthermore, some interesting phenomena are highlighted: a quantitative description of the initial alignment phenomenon and a proof that the process follows a specific saddle-to-saddle dynamics.
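As a reading aid, the sketch below spells out the standard formulation behind the setting described in the abstract: a one-hidden-layer ReLU network trained on the square loss by gradient flow from a vanishingly small initialisation, with pairwise orthogonal inputs. The notation (m hidden units, weights a_j and w_j, initialisation scale λ) is chosen here for illustration and is not taken from the paper.

```latex
% Illustrative formulation of the setting; notation may differ from the paper's.
\[
  f_\theta(x) = \sum_{j=1}^{m} a_j\, \sigma(\langle w_j, x \rangle),
  \qquad \sigma(u) = \max(u, 0),
\]
\[
  L(\theta) = \frac{1}{2n} \sum_{i=1}^{n} \big( f_\theta(x_i) - y_i \big)^2,
  \qquad \langle x_i, x_k \rangle = 0 \ \text{for } i \neq k
  \quad \text{(orthogonal inputs)},
\]
\[
  \frac{\mathrm{d}\theta_t}{\mathrm{d}t} = -\nabla L(\theta_t),
  \qquad \theta_0 = \lambda\, \theta_{\mathrm{init}}, \quad \lambda \to 0
  \quad \text{(small initialisation)}.
\]
```

Under this formulation, the abstract's claims concern the trajectory of θ_t: convergence of L(θ_t) to zero despite non-convexity, an implicit bias of the limit towards minimum variation norm, an initial phase in which neurons align in direction, and a subsequent saddle-to-saddle evolution of the loss.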
