

Poster

Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization

Idan Amir · Roi Livni · Nati Srebro

Hall J (level 1) #817

Keywords: [ Convex ] [ Optimization ] [ Stochastic ]


Abstract: We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e., where each instantaneous loss is a scalar convex function of a linear function. We show that in this setting, early-stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most $\epsilon$ (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample complexity of $\tilde{O}(1/\epsilon^2)$ and only $\tilde{O}(1/\epsilon^2)$ iterations. This contrasts with general stochastic convex optimization, where $\Omega(1/\epsilon^4)$ iterations are needed (Amir et al., 2021). The lower iteration complexity is ensured by leveraging uniform convergence rather than stability. However, instead of uniform convergence in a norm ball, which we show can only guarantee suboptimal learning using $\Theta(1/\epsilon^4)$ samples, we rely on uniform convergence in a distribution-dependent ball.
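The algorithm the abstract refers to is plain gradient descent on the empirical generalized-linear objective, run for a bounded number of iterations and never projected onto a norm ball. Below is a minimal illustrative sketch in Python; the logistic loss, step size, and iteration count are example choices for a convex Lipschitz loss, not the paper's exact constants or analysis.

```python
import numpy as np

def early_stopped_gd(X, y, loss_grad, eta=0.1, T=100):
    """Unprojected, unregularized gradient descent on the empirical objective
    (1/n) * sum_i phi(<w, x_i>, y_i), stopped after T iterations.
    loss_grad(z, y) should return d/dz phi(z, y) elementwise."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(T):
        z = X @ w
        g = X.T @ loss_grad(z, y) / n   # gradient of the empirical loss
        w = w - eta * g                 # no projection onto a norm ball
    return w

# Example instantaneous loss: logistic loss, a convex 1-Lipschitz
# function of the linear prediction z = <w, x>.
def logistic_grad(z, y):
    return -y / (1.0 + np.exp(y * z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_star = rng.normal(size=5)
y = np.sign(X @ w_star + 0.1 * rng.normal(size=200))
w_hat = early_stopped_gd(X, y, logistic_grad, eta=0.5, T=200)
```

In this sketch the only tuning knobs are the step size and the stopping time; the paper's point is that stopping early, rather than adding explicit regularization or projecting onto a Euclidean ball, is what controls the excess error.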
