Poster
Stochastic Variance Reduction Methods for Saddle-Point Problems
Balamurugan Palaniappan · Francis Bach

Tue Dec 09:00 AM -- 12:30 PM PST @ Area 5+6+7+8 #88

We consider convex-concave saddle-point problems where the objective function may be split into many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems, which are common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply, and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities; (b) there are two notions of splits, in terms of functions or in terms of partial derivatives; (c) the split does not need to be made of convex-concave terms; (d) non-uniform sampling is key to an efficient algorithm, both in theory and in practice; and (e) these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework, leading to an algorithm that is always superior to accelerated batch algorithms.
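To make the variance-reduction idea concrete, here is a minimal SVRG-style sketch for a toy regularized bilinear saddle-point problem. All problem data, constants, and function names below are illustrative assumptions, not the paper's actual setup: the paper uses proximal (backward) steps and non-uniform sampling, while this sketch uses plain forward steps with uniform sampling. The key pattern it shows is the variance-reduced estimate of the monotone operator: stochastic component at the current point, minus its value at a snapshot, plus the full operator at the snapshot.

```python
import numpy as np

def svrg_saddle(As, b, c, lam, step, epochs=80, seed=0):
    """Illustrative SVRG-style method (not the paper's exact algorithm) for
        min_x max_y  (1/n) sum_i x^T A_i y + b^T x + (lam/2)||x||^2
                                           - c^T y - (lam/2)||y||^2.
    The associated monotone operator, with A = mean(A_i), is
        F(x, y) = (A y + lam x + b, -A^T x + lam y + c),
    and only the bilinear part is split across the components A_i.
    """
    rng = np.random.default_rng(seed)
    n = len(As)
    A = sum(As) / n                       # full (averaged) bilinear operator
    x = np.zeros(As[0].shape[0])
    y = np.zeros(As[0].shape[1])

    def bilin(Ai, u, v):
        # per-component bilinear part of the operator (x-block, y-block)
        return Ai @ v, -Ai.T @ u

    for _ in range(epochs):
        # snapshot and full operator, recomputed once per epoch (SVRG pattern)
        xs, ys = x.copy(), y.copy()
        Fx_full, Fy_full = bilin(A, xs, ys)
        for _ in range(n):
            i = rng.integers(n)           # uniform sampling; the paper
                                          # advocates non-uniform sampling
            gx, gy = bilin(As[i], x, y)
            sx, sy = bilin(As[i], xs, ys)
            # variance-reduced operator estimate + regularizer/linear terms
            vx = gx - sx + Fx_full + lam * x + b
            vy = gy - sy + Fy_full + lam * y + c
            # simultaneous forward step: descent in x, ascent in y
            x = x - step * vx
            y = y - step * vy
    return x, y
```

Because the noise terms vanish as the iterates and the snapshot both approach the saddle point, the estimate's variance goes to zero, which is what permits linear convergence with a constant step size; a plain stochastic gradient method would need decaying steps.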

Author Information

Balamurugan Palaniappan (INRIA)
Francis Bach (INRIA - Ecole Normale Superieure)