

Poster

Stochastic Composite Mirror Descent: Optimal Bounds with High Probabilities

Yunwen Lei · Ke Tang

Room 210 #97

Keywords: [ Stochastic Methods ] [ Convex Optimization ] [ Learning Theory ] [ Kernel Methods ]


Abstract:

We study stochastic composite mirror descent, a class of scalable algorithms able to exploit the geometry and composite structure of a problem. We consider both convex and strongly convex objectives with non-smooth loss functions, and for each we establish high-probability convergence rates that are optimal up to a logarithmic factor. We apply the derived computational error bounds to study the generalization performance of multi-pass stochastic gradient descent (SGD) in a non-parametric setting. Our high-probability generalization bounds enjoy a logarithmic dependency on the number of passes provided that the step size sequence is square-summable, improving on existing in-expectation bounds with a polynomial dependency and thereby providing strong justification for the ability of multi-pass SGD to overcome overfitting. Our analysis removes boundedness assumptions on subgradients often imposed in the literature. Numerical results are reported to support our theoretical findings.
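The abstract gives no pseudocode, so the following is a minimal sketch of the algorithm family it studies, specialized to the Euclidean mirror map, where the composite mirror descent update reduces to a proximal SGD step, with an l1 composite term handled in closed form by soft-thresholding. The step sizes follow the square-summable regime mentioned in the abstract (eta_t = eta0 * t^(-theta) with theta > 1/2). All function names, parameter values, and the toy least-squares problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau * ||.||_1 (closed form for the l1 regularizer).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def scmd_l1(grad_fn, w0, lam, n_iters, eta0=0.1, theta=0.75, rng=None):
    """Stochastic composite mirror descent with the Euclidean mirror map
    (i.e. proximal SGD) on the objective f(w) + lam * ||w||_1.

    grad_fn(w, rng) returns a stochastic subgradient of f at w.
    Step sizes eta_t = eta0 * (t+1)**(-theta) are square-summable
    whenever theta > 1/2.
    """
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for t in range(n_iters):
        eta = eta0 * (t + 1) ** (-theta)
        g = grad_fn(w, rng)
        # Composite update: gradient step on f, prox step on the l1 term.
        w = soft_threshold(w - eta * g, eta * lam)
    return w

# Toy usage: stochastic least squares with l1 regularization.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=500)

def grad_fn(w, rng):
    i = rng.integers(len(y))  # sample one example per step
    return (X[i] @ w - y[i]) * X[i]

w_hat = scmd_l1(grad_fn, np.zeros(20), lam=0.01, n_iters=5000)
print(np.round(w_hat[:5], 2))
```

A general (non-Euclidean) mirror map would replace the prox step with a Bregman-distance minimization, which is how the paper's setting exploits problem geometry; the Euclidean case above is just the simplest instance.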
