We propose a novel randomized incremental gradient algorithm, the VAriance-Reduced Accelerated Gradient method (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the condition number of the problem, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear rate of convergence. It also attains an optimal linear rate of convergence for a wide class of problems that satisfy only a certain error bound condition rather than strong convexity. Varag can further be extended to solve stochastic finite-sum problems.
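To make the variance-reduction ingredient concrete, here is a minimal sketch of an SVRG-style loop on a synthetic least-squares finite sum. This is not Varag itself: the problem data, function names, and step size below are all illustrative assumptions, and the sketch omits Varag's Nesterov-type acceleration and its unified, condition-number-adaptive step-size policy. It only shows the snapshot-based gradient estimator that accelerated variance-reduced methods build on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
n, d = 200, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def full_grad(x):
    # Gradient of the full finite-sum objective
    return A.T @ (A @ x - b) / n

def comp_grad(i, x):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

L = np.linalg.eigvalsh(A.T @ A / n).max()  # smoothness constant
eta = 0.1 / L                              # conservative fixed step size (illustrative)
x = np.zeros(d)

for epoch in range(30):
    x_snap = x.copy()            # snapshot point for this epoch
    mu = full_grad(x_snap)       # full gradient at the snapshot
    for _ in range(n):           # inner loop of cheap stochastic steps
        i = rng.integers(n)
        # Variance-reduced estimator: unbiased, and its variance
        # shrinks as x approaches the snapshot / a solution
        g = comp_grad(i, x) - comp_grad(i, x_snap) + mu
        x = x - eta * g
    print(f"epoch {epoch:2d}  f(x) = {0.5 * np.mean((A @ x - b) ** 2):.3e}")
```

The snapshot gradient `mu` is what drives the estimator's variance toward zero near a solution; Varag couples this estimator with accelerated extrapolation steps and epoch-dependent parameters to reach the optimal rates stated above.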
Guanghui Lan (Georgia Tech)
Zhize Li (Tsinghua University, and KAUST)
Zhize Li has been a Research Scientist at the King Abdullah University of Science and Technology (KAUST) since September 2020. He received his PhD in Computer Science from Tsinghua University in 2019 (advisor: Prof. Jian Li). He was a postdoc at KAUST (hosted by Prof. Peter Richtárik), a visiting scholar at Duke University (hosted by Prof. Rong Ge), and a visiting scholar at the Georgia Institute of Technology (hosted by Prof. Guanghui (George) Lan).
Yi Zhou (IBM Almaden Research Center)