

Poster in Workshop: OPT 2021: Optimization for Machine Learning

Last-Iterate Convergence of Saddle Point Optimizers via High-Resolution Differential Equations

Tatjana Chavdarova · Michael Jordan · Emmanouil Zampetakis


Abstract:

Several widely used first-order saddle point optimization methods yield the same continuous-time ordinary differential equation (ODE) as Gradient Descent Ascent (GDA) when derived naively. However, their convergence properties differ sharply, even in simple bilinear games. In this paper, we use a technique from fluid dynamics called High-Resolution Differential Equations (HRDEs) to design ODEs that converge to saddle points. As our main result, we design an ODE that exhibits last-iterate convergence for monotone variational inequalities. To our knowledge, this is the first continuous-time dynamics shown to converge in such a general setting. We also provide an implicit discretization of our ODE and show that it has last-iterate convergence at a rate $O(1/\sqrt{T})$, which was previously shown to be optimal (Golowich et al., 2020). Our result uses only first-order smoothness of the monotone operator, in contrast to previous results that also require second-order smoothness.
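A minimal sketch of the phenomenon the abstract describes, on the bilinear game $\min_x \max_y xy$ (this is an illustrative toy, not the paper's HRDE or its specific discretization): the explicit Euler discretization of the naive GDA ODE diverges, while an implicit discretization of the same dynamics converges to the saddle point in the last iterate.

```python
import numpy as np

# Bilinear game min_x max_y x*y with saddle point at z = (x, y) = (0, 0).
# The associated monotone operator is F(x, y) = (y, -x), i.e. F(z) = A @ z:
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

eta = 0.1                      # step size
I = np.eye(2)
z_explicit = np.array([1.0, 1.0])   # explicit (forward) Euler iterate
z_implicit = np.array([1.0, 1.0])   # implicit (backward) Euler iterate

for _ in range(200):
    # Explicit Euler / GDA: z_{k+1} = z_k - eta * F(z_k).
    # Since A is skew-symmetric, ||z|| grows by sqrt(1 + eta^2) each step.
    z_explicit = z_explicit - eta * (A @ z_explicit)

    # Implicit Euler: z_{k+1} = z_k - eta * F(z_{k+1}),
    # i.e. solve (I + eta * A) z_{k+1} = z_k.
    # Here ||z|| shrinks by 1/sqrt(1 + eta^2), so the last iterate converges.
    z_implicit = np.linalg.solve(I + eta * A, z_implicit)

print("explicit Euler ||z||:", np.linalg.norm(z_explicit))  # diverges
print("implicit Euler ||z||:", np.linalg.norm(z_implicit))  # tends to 0
```

The contrast shows why the naive continuous-time limit is uninformative: both iterations discretize the same ODE, yet only the implicit one drives the last iterate to the saddle point.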
