Oral

Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models

Alex Damian · Eshaan Nichani · Rong Ge · Jason Lee

Hall C2 (level 1 gate 9 south of food court)

Abstract: We focus on the task of learning a single index model σ(w · x) with respect to the isotropic Gaussian distribution in d dimensions. Prior work has shown that the sample complexity of learning w is governed by the information exponent k of the link function σ, which is defined as the index of the first nonzero Hermite coefficient of σ. Ben Arous et al. (2021) showed that n ≳ d^(k−1) samples suffice for learning w and that this is tight for online SGD. However, the CSQ lower bound for gradient-based methods only shows that n ≳ d^(k/2) samples are necessary. In this work, we close the gap between the upper and lower bounds by showing that online SGD on a smoothed loss learns w with n ≳ d^(k/2) samples. We also draw connections to statistical analyses of tensor PCA and to the implicit regularization effects of minibatch SGD on empirical losses.
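
The information exponent is concrete to compute: expand σ in the probabilists' Hermite basis and report the index of its first nonzero coefficient. Here is a minimal Python sketch using Gauss-Hermite quadrature from NumPy; the function name `information_exponent` and the tolerance are illustrative choices, not from the paper.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def information_exponent(sigma, max_k=10, deg=80, tol=1e-6):
    """Index of the first nonzero (probabilists') Hermite coefficient of sigma."""
    # hermegauss integrates against the weight exp(-z^2 / 2); dividing by
    # sqrt(2 * pi) turns the quadrature sum into an expectation over z ~ N(0, 1).
    z, w = He.hermegauss(deg)
    norm = np.sqrt(2 * np.pi)
    for k in range(1, max_k + 1):
        e_k = np.zeros(k + 1)
        e_k[k] = 1.0                      # coefficient vector selecting He_k
        c_k = np.sum(w * sigma(z) * He.hermeval(z, e_k)) / norm
        if abs(c_k) > tol:
            return k                      # first nonzero Hermite coefficient
    return None

print(information_exponent(lambda z: z**3 - 3 * z))      # He_3 link: prints 3
print(information_exponent(lambda z: np.maximum(z, 0)))  # ReLU link: prints 1
```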
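The smoothing idea can likewise be sketched: run online SGD, but average each gradient over randomly perturbed copies of w projected back to the unit sphere. The toy implementation below uses Gaussian perturbations as a stand-in for the paper's exact spherical smoothing operator, a correlation loss in place of the paper's exact objective, and hypothetical hyperparameters (lr, lam, m); it is an illustration, not the authors' algorithm.

```python
import numpy as np

def smoothed_online_sgd(sigma, dsigma, w_star, n_steps,
                        lr=0.01, lam=1.0, m=8, seed=0):
    """Online SGD on a Monte Carlo estimate of a smoothed correlation loss
    for the single index model y = sigma(w_star . x)."""
    rng = np.random.default_rng(seed)
    d = w_star.shape[0]
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)                  # spherical constraint on w
    for _ in range(n_steps):
        x = rng.standard_normal(d)          # one fresh sample per step (online SGD)
        y = sigma(w_star @ x)
        g = np.zeros(d)
        for _ in range(m):                  # average gradients over perturbed weights
            z = rng.standard_normal(d)
            u = w + lam * z / np.sqrt(d)    # perturb on a lam / sqrt(d) scale
            u /= np.linalg.norm(u)
            g += -y * dsigma(u @ x) * x / m  # grad of -y * sigma(u . x) w.r.t. u
        w -= lr * g
        w /= np.linalg.norm(w)              # project back to the unit sphere
    return w

# Toy run with the He_2 link (information exponent 2); all settings illustrative.
d = 50
rng = np.random.default_rng(1)
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
w = smoothed_online_sgd(lambda t: t**2 - 1, lambda t: 2 * t, w_star, n_steps=20_000)
print(abs(w @ w_star))                      # overlap with the true direction
```

Since the He_2 link is even, w and -w fit the data equally well, so the absolute overlap is the natural measure of recovery.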
