Poster

Accelerating Stochastic Gradient Descent using Predictive Variance Reduction

Rie Johnson · Tong Zhang

Harrah's Special Events Center, 2nd Floor

Abstract:

Stochastic gradient descent is popular for large-scale optimization, but it has a slow asymptotic convergence rate due to the inherent variance of the stochastic gradients. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as stochastic dual coordinate ascent (SDCA) and stochastic average gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.
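The sketch below illustrates the variance-reduction idea described in the abstract: each epoch computes a full gradient at a snapshot point, and the inner loop corrects each stochastic gradient with the snapshot's gradient so the update stays unbiased while its variance shrinks near the optimum. The function name, step size, epoch count, and inner-loop length here are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def svrg(grad_i, w0, n, eta=0.1, num_epochs=20, inner_steps=None, rng=None):
    """Minimal SVRG sketch for minimizing (1/n) * sum_i f_i(w).

    grad_i(w, i): returns the gradient of the i-th component f_i at w.
    eta, num_epochs, inner_steps: illustrative defaults, not tuned values.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = 2 * n if inner_steps is None else inner_steps  # inner-loop length
    w_snapshot = np.asarray(w0, dtype=float).copy()

    for _ in range(num_epochs):
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = np.mean([grad_i(w_snapshot, i) for i in range(n)], axis=0)

        w = w_snapshot.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: unbiased estimate whose
            # variance decreases as w and w_snapshot approach the optimum.
            g = grad_i(w, i) - grad_i(w_snapshot, i) + full_grad
            w = w - eta * g

        # Snapshot update: here we keep the last inner iterate (one common choice).
        w_snapshot = w

    return w_snapshot


# Usage sketch with a hypothetical least-squares problem:
# grad = lambda w, i: (X[i] @ w - y[i]) * X[i]
# w_hat = svrg(grad, w0=np.zeros(d), n=len(y))
```

Note that only the snapshot's full gradient is kept between inner steps, so no per-example gradient table is stored, in contrast to SDCA and SAG.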
