

Poster

Non-convex Finite-Sum Optimization Via SCSG Methods

Lihua Lei · Cheng Ju · Jianbo Chen · Michael Jordan

Pacific Ballroom #157

Keywords: [ Optimization for Deep Networks ] [ Non-Convex Optimization ] [ Optimization ]


Abstract: We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods, for the smooth nonconvex finite-sum optimization problem. Assuming only the smoothness of each component, the complexity of SCSG to reach a stationary point with $\mathbb{E}\|\nabla f(x)\|^{2}\le \epsilon$ is $O(\min\{\epsilon^{-5/3}, \epsilon^{-1}n^{2/3}\})$, which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than the state-of-the-art methods based on variance reduction, and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Lojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.
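As a rough illustration of the outer/inner loop structure SCSG uses (a large-batch anchor gradient as a control variate, followed by a geometrically distributed number of variance-reduced mini-batch steps), here is a minimal NumPy sketch. The gradient oracle `grad_f`, the parameter names, and the default values are illustrative assumptions, not part of the abstract or a definitive implementation of the authors' method.

```python
import numpy as np

def scsg(grad_f, x0, n, batch_size=256, mini_batch=16, eta=0.05,
         n_outer=50, rng=None):
    """Sketch of an SCSG-style loop (hypothetical interface).

    grad_f(x, idx) -> averaged gradient of the components indexed by `idx`.
    n              -> total number of component functions in the finite sum.
    """
    rng = np.random.default_rng(rng)
    x = x0.copy()
    for _ in range(n_outer):
        # Outer step: large-batch gradient serves as the control-variate anchor.
        big_batch = rng.choice(n, size=batch_size, replace=False)
        g_anchor = grad_f(x, big_batch)
        x_anchor = x.copy()
        # Inner loop length drawn from a geometric distribution, so that its
        # expected value is roughly batch_size / mini_batch.
        n_inner = rng.geometric(mini_batch / (mini_batch + batch_size))
        for _ in range(n_inner):
            idx = rng.choice(n, size=mini_batch, replace=False)
            # Variance-reduced gradient estimate around the anchor point.
            v = grad_f(x, idx) - grad_f(x_anchor, idx) + g_anchor
            x = x - eta * v
    return x
```

The sketch only conveys the control-variate structure; batch sizes, step sizes, and the geometric inner-loop length would need to be set according to the paper's analysis to obtain the stated complexity guarantees.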
