Minibatch Stochastic Approximate Proximal Point Methods
Hilal Asi, Karan Chadha, Gary Cheng, John Duchi
Spotlight presentation: Orals & Spotlights Track 21: Optimization
on 2020-12-09T07:50:00-08:00 - 2020-12-09T08:00:00-08:00
Abstract: We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch setting. To do this, we propose two minibatched algorithms for which we prove a non-asymptotic upper bound on the rate of convergence, revealing a linear speedup in minibatch size. In contrast to standard stochastic gradient methods, these methods may have linear speedup in the minibatch setting even for non-smooth functions. Our algorithms maintain the desirable traits characteristic of the aProx family, such as robustness to the initial step size choice. Additionally, we show improved convergence rates for "interpolation" problems, which, for example, yields a new parallelization strategy for alternating projections. We corroborate our theoretical results with extensive empirical testing, which demonstrates the gains provided by accurate modeling and minibatching.
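To make the model-based idea concrete, below is a minimal sketch of one natural minibatch variant of an aProx-style truncated-model update, assuming the standard truncated (lower-bounded) linear model from the aProx literature and simple averaging of per-example losses and subgradients over the minibatch. The function name, the averaging scheme, and the known lower bound are illustrative assumptions; this is not a reproduction of the two algorithms analyzed in the paper.

```python
import numpy as np

def truncated_minibatch_step(x, losses, grads, stepsize, lower_bound=0.0):
    """Illustrative aProx-style truncated-model step on a minibatch (sketch).

    losses: per-example losses f(x; s_i) at the current iterate x.
    grads:  per-example (sub)gradients of f(.; s_i) at x, shape (m, d).
    The model is the linear approximation of the *average* minibatch loss,
    truncated below at a known lower bound (often 0 for nonnegative losses).
    """
    f_bar = float(np.mean(losses))      # average minibatch loss
    g_bar = np.mean(grads, axis=0)      # average minibatch subgradient
    g_norm_sq = float(np.dot(g_bar, g_bar))
    if g_norm_sq == 0.0:
        return x                        # model is flat; nothing to do

    # Minimizing max{f_bar + <g_bar, y - x>, lower_bound} + ||y - x||^2 / (2*stepsize)
    # has the closed form below: a subgradient step whose length is capped by
    # the (Polyak-type) step (f_bar - lower_bound) / ||g_bar||^2.
    step = min(stepsize, max(f_bar - lower_bound, 0.0) / g_norm_sq)
    return x - step * g_bar
```

The cap on the step length is what gives truncated-model methods their robustness to large initial step sizes: when the step size is too aggressive, the update defaults to the Polyak-type step implied by the known lower bound rather than overshooting.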