Poster
Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization
Sashank J. Reddi · Suvrit Sra · Barnabas Poczos · Alexander Smola

Mon Dec 05 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #60

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with a constant minibatch size converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point with constant minibatch sizes. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results build on recent variance reduction techniques for convex optimization, but with a novel analysis for handling nonconvex and nonsmooth functions. We also prove a global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes the settings of several recent works.
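The abstract concerns problems of the form min_x (1/n) Σ_i f_i(x) + h(x), where each f_i is smooth and possibly nonconvex and h is convex but nonsmooth. The sketch below is a minimal ProxSVRG-style loop illustrating that structure: a periodic full gradient at a snapshot point serves as the variance-reduction anchor, and the nonsmooth part enters only through its proximal operator. This is an illustrative sketch, not the paper's exact algorithm; the names (prox_svrg, grad_fi, soft_threshold), the choice of h as an l1 penalty, the step size, and the toy problem are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1, a standard convex nonsmooth regularizer
    # used here as a stand-in for the general convex nonsmooth part h.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(grad_fi, n, x0, step, lam, epochs=20, inner_steps=None, batch=1, seed=None):
    """Illustrative ProxSVRG-style loop for min_x (1/n) sum_i f_i(x) + lam * ||x||_1.

    grad_fi(i, x) returns the gradient of the i-th smooth (possibly nonconvex)
    component f_i at x. All parameter choices here are illustrative defaults.
    """
    rng = np.random.default_rng(seed)
    m = inner_steps or n                      # inner-loop length (a tunable choice)
    x_tilde = x0.copy()                       # snapshot point
    for _ in range(epochs):
        # Full gradient at the snapshot: the variance-reduction anchor.
        full_grad = np.mean([grad_fi(i, x_tilde) for i in range(n)], axis=0)
        x = x_tilde.copy()
        for _ in range(m):
            idx = rng.integers(0, n, size=batch)
            # Variance-reduced stochastic gradient estimate.
            v = np.mean([grad_fi(i, x) - grad_fi(i, x_tilde) for i in idx], axis=0) + full_grad
            # Proximal step handles the nonsmooth convex part.
            x = soft_threshold(x - step * v, step * lam)
        x_tilde = x
    return x_tilde

# Toy usage: nonconvex sigmoid least squares with an l1 penalty (hypothetical data).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)

def grad_fi(i, x):
    s = 1.0 / (1.0 + np.exp(-(A[i] @ x)))    # sigmoid link makes each f_i nonconvex
    return 2.0 * (s - b[i]) * s * (1.0 - s) * A[i]

x_star = prox_svrg(grad_fi, n=100, x0=np.zeros(10), step=0.1, lam=0.01)
```

The periodic full gradient is what makes the variance of the stochastic estimate shrink as iterates approach the snapshot, which is the mechanism behind convergence guarantees with constant minibatch sizes, in contrast to plain proximal SGD.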

Author Information

Sashank J. Reddi (Carnegie Mellon University)
Suvrit Sra (MIT)

Suvrit Sra is a faculty member in the EECS department at MIT, where he is also a core faculty member of IDSS, LIDS, the MIT-ML Group, and the Statistics and Data Science Center. His research spans topics in optimization, matrix theory, differential geometry, and probability theory, which he connects with machine learning; a key focus of his research is the theme "Optimization for Machine Learning" (http://opt-ml.org).

Barnabas Poczos (Carnegie Mellon University)
Alexander Smola (Amazon)

AWS Machine Learning