

Poster

Bayes-optimal learning of an extensive-width neural network from quadratically many samples

Antoine Maillard · Emanuele Troiani · Simon Martin · Florent Krzakala · Lenka Zdeborová

East Exhibit Hall A-C #2203
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We consider the problem of learning a target function corresponding to a single-hidden-layer neural network with a quadratic activation function after the first layer and random weights. We consider the asymptotic limit where the input dimension and the network width are proportionally large. Recent work [Cui et al., 2023] established that linear regression achieves the Bayes-optimal test error for learning such a function when the number of available samples is only linear in the dimension. That work stressed the open challenge of theoretically analyzing the optimal test error in the more interesting regime where the number of samples is quadratic in the dimension. In this paper, we solve this challenge for quadratic activations and derive a closed-form expression for the Bayes-optimal test error. Technically, our result is enabled by establishing a link with recent works on the optimal denoising of extensive-rank matrices and on the ellipsoid fitting problem. We further show empirically that, in the absence of noise, randomly initialized gradient descent seems to sample the space of weights, leading to zero training loss, and that averaging over initializations yields a test error equal to the Bayes-optimal one.
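The noiseless gradient-descent experiment described at the end of the abstract can be illustrated with a minimal sketch. The Python/NumPy snippet below is not the authors' code: it takes a teacher of the stated form, f*(x) = sum_i (w_i . x)^2 with random Gaussian weights, trains a student of the same architecture by plain gradient descent from several random initializations, and averages the resulting predictions. The sizes d, k, n, the weight normalization, and the hyperparameters lr and steps are illustrative assumptions, with n chosen quadratic in d.

import numpy as np

rng = np.random.default_rng(0)

d, k = 30, 15            # input dimension and hidden width (proportional, though small here)
n = 4 * d * d            # quadratically many samples: n = Theta(d^2)

# Teacher: one hidden layer, quadratic activation, i.i.d. Gaussian rows w_i
# (one possible normalization convention, assumed for this sketch).
W_star = rng.standard_normal((k, d)) / np.sqrt(d)

def forward(W, X):
    # Network output: sum of squared pre-activations, f(x) = sum_i (w_i . x)^2.
    return np.sum((X @ W.T) ** 2, axis=1)

X_train = rng.standard_normal((n, d))
y_train = forward(W_star, X_train)       # noiseless labels
X_test = rng.standard_normal((2000, d))
y_test = forward(W_star, X_test)

def train(seed, lr=5e-3, steps=4000):
    # Plain gradient descent on the squared loss from a random initialization.
    W = np.random.default_rng(seed).standard_normal((k, d)) / np.sqrt(d)
    for _ in range(steps):
        Z = X_train @ W.T                               # pre-activations, shape (n, k)
        r = np.sum(Z ** 2, axis=1) - y_train            # residuals, shape (n,)
        grad = (4.0 / n) * (Z * r[:, None]).T @ X_train # d/dW of mean squared residual
        W -= lr * grad
    return W

# Average predictions over independently initialized runs and compare the
# test error of a single run with that of the initialization average.
preds = np.stack([forward(train(seed), X_test) for seed in range(8)])
print("single-run test MSE:   ", np.mean((preds[0] - y_test) ** 2))
print("init-averaged test MSE:", np.mean((preds.mean(axis=0) - y_test) ** 2))

With settings of this kind one would expect the initialization-averaged predictor to reach a lower test error than any single run, in line with the abstract's empirical observation; whether that average matches the Bayes-optimal value is precisely what the paper's closed-form expression quantifies.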
