Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm
Deanna Needell · Rachel Ward · Nati Srebro

Tue Dec 09 04:00 PM -- 08:59 PM (PST) @ Level 2, room 210D

We improve a recent guarantee of Bach and Moulines on the linear convergence of SGD for smooth and strongly convex objectives, reducing a quadratic dependence on the strong convexity to a linear dependence. Furthermore, we show how reweighting the sampling distribution (i.e. importance sampling) is necessary in order to further improve convergence, and obtain a linear dependence on average smoothness, dominating previous results. More broadly, we discuss how importance sampling for SGD can also improve convergence in other scenarios. Our results are based on a connection we make between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods.
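The randomized Kaczmarz algorithm referenced in the abstract can be illustrated with a short sketch. This is not the authors' code; it is a minimal NumPy implementation of the standard method for a consistent linear system Ax = b, where each row is sampled with probability proportional to its squared norm (the importance-sampling distribution the abstract alludes to), and the iterate is projected onto the sampled row's hyperplane. Function and variable names are chosen for illustration.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent system Ax = b by randomized Kaczmarz:
    sample row i with probability ||a_i||^2 / ||A||_F^2, then
    project the current iterate onto {x : a_i @ x = b_i}."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)  # squared row norms
    probs = row_norms_sq / row_norms_sq.sum()   # importance-sampling weights
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        a = A[i]
        # Orthogonal projection onto the i-th hyperplane.
        x += (b[i] - a @ x) / row_norms_sq[i] * a
    return x

# Usage: recover the solution of a small consistent system.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

Viewed through the abstract's lens, each projection step is an SGD step on the least-squares objective with a particular step size, which is the connection that lets guarantees transfer between the two literatures.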

Author Information

Deanna Needell (Claremont McKenna College)
Rachel Ward (Univ. of Texas Austin)
Nati Srebro (TTI-Chicago)
