Poster
Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization
Idan Amir · Roi Livni · Nati Srebro
Hall J (level 1) #817
Keywords: [ Convex ] [ Optimization ] [ Stochastic ]
Abstract:
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e., where each instantaneous loss is a scalar convex function of a linear function. We show that in this setting, early stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most $\epsilon$ (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample complexity of $\tilde{O}(1/\epsilon^2)$ and only $\tilde{O}(1/\epsilon^2)$ iterations. This contrasts with general stochastic convex optimization, where $\Omega(1/\epsilon^4)$ iterations are needed (Amir et al., 2021). The lower iteration complexity is ensured by leveraging uniform convergence rather than stability. But instead of uniform convergence in a norm ball, which we show can guarantee suboptimal learning using $\Theta(1/\epsilon^4)$ samples, we rely on uniform convergence in a distribution-dependent ball.
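To make the setup concrete, here is a minimal illustrative sketch (not the authors' code) of the procedure the abstract describes: plain full-batch gradient descent on a generalized linear objective, with no projection onto a norm ball and no explicit regularization, stopped early after roughly as many iterations as samples. The hinge loss, the synthetic data, the step size, and all constants are assumptions made for illustration only.

```python
import numpy as np

# Minimal sketch: early-stopped, unprojected, unregularized GD on an empirical
# generalized linear objective (a convex Lipschitz loss of a linear predictor).
# All names, data, and constants here are illustrative assumptions.

rng = np.random.default_rng(0)

n, d = 500, 50                            # samples and dimension (arbitrary)
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))

def loss_and_grad(w):
    """Empirical hinge loss: a 1-Lipschitz convex function of <w, x>."""
    margins = y * (X @ w)
    losses = np.maximum(0.0, 1.0 - margins)
    active = (margins < 1.0).astype(float)   # where the hinge is active
    grad = -(X * (active * y)[:, None]).mean(axis=0)  # subgradient w.r.t. w
    return losses.mean(), grad

# Early stopping: on the order of T ~ n iterations with step size ~ 1/sqrt(T),
# in the spirit of the 1/eps^2 iteration count referred to in the abstract.
T = n
eta = 1.0 / np.sqrt(T)

w = np.zeros(d)
avg_w = np.zeros(d)
for t in range(T):
    _, g = loss_and_grad(w)
    w -= eta * g                          # plain GD step, no projection
    avg_w += w / T                        # return the averaged iterate

print("final empirical loss:", loss_and_grad(avg_w)[0])
```

The point of the sketch is the absence of any projection step or regularizer: the iterates are controlled only by the step size and the early stopping time, which is the regime the paper analyzes.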