

Oral in Workshop: OPT 2023: Optimization for Machine Learning

Last Iterate Convergence of Popov Method for Non-monotone Stochastic Variational Inequalities

Daniil Vankov · Angelia Nedich · Lalitha Sankar


Abstract: This paper focuses on non-monotone stochastic variational inequalities (SVIs) that may not have a unique solution. A commonly used and efficient algorithm for solving VIs is the Popov method, which is known to achieve the optimal convergence rate for VIs with Lipschitz continuous and strongly monotone operators. We introduce a broader class of structured non-monotone operators, namely *$p$-quasi-sharp* operators with $p > 0$, which allows for a tractable analysis of the convergence behavior of algorithms. We show that the stochastic Popov method converges *almost surely* to a solution for all operators in this class under a *linear growth* condition. In addition, we obtain the last-iterate convergence rate (in expectation) of the method under a linear growth condition for $2$-quasi-sharp operators. Based on our analysis, we refine these results for smooth $2$-quasi-sharp and $p$-quasi-sharp operators (on a compact set) and obtain the optimal convergence rates.
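For readers unfamiliar with the algorithm analyzed here, below is a minimal sketch of the stochastic Popov iteration (also known as extrapolation from the past). The update rule is the standard one; the operator, noise model, step size, and helper names are illustrative assumptions, not the paper's setup.

```python
# Sketch of the stochastic Popov method for a variational inequality:
# find x* such that F(x*)^T (x - x*) >= 0 for all feasible x.
# Noise model, step size, and the test operator are assumptions for illustration.
import numpy as np

def stochastic_popov(F, x0, gamma=0.01, n_iters=10_000, noise_std=0.1, seed=0):
    """Run the stochastic Popov iteration and return the last iterate.

    Updates, using noisy estimates of F:
        w_k     = z_k - gamma * (F(w_{k-1}) + noise)   # extrapolation step
        z_{k+1} = z_k - gamma * (F(w_k)     + noise)   # main step
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(x0, dtype=float)
    w_prev = z.copy()
    for _ in range(n_iters):
        g_extra = F(w_prev) + noise_std * rng.standard_normal(z.shape)
        w = z - gamma * g_extra          # extrapolate using the past evaluation
        g_main = F(w) + noise_std * rng.standard_normal(z.shape)
        z = z - gamma * g_main           # update the main iterate
        w_prev = w
    return z

# Example: F(x) = A x with an identity symmetric part plus a rotation,
# a strongly monotone operator whose unique solution is x* = 0.
A = np.array([[1.0, 1.0], [-1.0, 1.0]])
x_last = stochastic_popov(lambda x: A @ x, x0=np.array([5.0, -3.0]))
print(x_last)  # hovers near the solution [0, 0] for small enough gamma
```

Note that, unlike the extragradient method, Popov's method reuses the previous extrapolation point and so needs only one fresh operator evaluation per main update.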
