

Poster

The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization

Dmitry Kovalev · Alexander Gasnikov

Hall J (level 1) #820

Keywords: [ saddle point problems ] [ minimax optimization ] [ convex optimization ] [ optimal algorithms ]


Abstract: In this paper, we revisit the smooth and strongly-convex-strongly-concave minimax optimization problem. Zhang et al. (2021) and Ibrahim et al. (2020) established the lower bound $\Omega\left(\sqrt{\kappa_x\kappa_y}\log\frac{1}{\epsilon}\right)$ on the number of gradient evaluations required to find an $\epsilon$-accurate solution, where $\kappa_x$ and $\kappa_y$ are condition numbers for the strong convexity and strong concavity assumptions. However, the existing state-of-the-art methods do not match this lower bound: algorithms of Lin et al. (2020) and Wang and Li (2020) have gradient evaluation complexity $O\left(\sqrt{\kappa_x\kappa_y}\log^3\frac{1}{\epsilon}\right)$ and $O\left(\sqrt{\kappa_x\kappa_y}\log^3(\kappa_x\kappa_y)\log\frac{1}{\epsilon}\right)$, respectively. We fix this fundamental issue by providing the first algorithm with $O\left(\sqrt{\kappa_x\kappa_y}\log\frac{1}{\epsilon}\right)$ gradient evaluation complexity. We design our algorithm in three steps: (i) we reformulate the original problem as a minimization problem via the pointwise conjugate function; (ii) we apply a specific variant of the proximal point algorithm to the reformulated problem; (iii) we compute the proximal operator inexactly using the optimal algorithm for operator norm reduction in monotone inclusions.
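For concreteness, the problem class and the template behind the three design steps can be written out. The notation below ($f$, $L$, $\mu_x$, $\mu_y$, $g$, $\eta$) is not defined in the abstract; it is a standard-usage assumption on our part, a sketch rather than the paper's exact formulation:

\[
\min_{x \in \mathbb{R}^{d_x}} \; \max_{y \in \mathbb{R}^{d_y}} \; f(x, y),
\]

where $f$ is $L$-smooth, $\mu_x$-strongly convex in $x$, and $\mu_y$-strongly concave in $y$, giving condition numbers $\kappa_x = L/\mu_x$ and $\kappa_y = L/\mu_y$. In this template, step (i) replaces the minimax problem with a minimization problem $\min_x g(x)$ built from a pointwise conjugate of $f$; step (ii) runs a proximal point iteration on it,

\[
x^{k+1} \approx \operatorname{prox}_{\eta g}(x^k) = \operatorname*{arg\,min}_{x} \Big\{ g(x) + \tfrac{1}{2\eta} \|x - x^k\|^2 \Big\};
\]

and step (iii) solves each such proximal subproblem inexactly with an optimal method for operator norm reduction in monotone inclusions, which is what yields the overall $O\left(\sqrt{\kappa_x\kappa_y}\log\frac{1}{\epsilon}\right)$ gradient evaluation complexity claimed above.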
