
Poster

Improved Algorithms for Convex-Concave Minimax Optimization

Yuanhao Wang · Jian Li

Poster Session 2 #656

Abstract: This paper studies minimax optimization problems $\min_x \max_y f(x,y)$, where $f(x,y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$, and $(L_x, L_{xy}, L_y)$-smooth. Zhang et al. \cite{zhang2019lower} provided the following lower bound on the gradient complexity of any first-order method: $\Omega\left(\sqrt{\frac{L_x}{m_x} + \frac{L_{xy}^2}{m_x m_y} + \frac{L_y}{m_y}} \cdot \ln(1/\epsilon)\right)$. This paper proposes a new algorithm and proves a gradient complexity bound of $\tilde{O}\left(\sqrt{\frac{L_x}{m_x} + \frac{L \cdot L_{xy}}{m_x m_y} + \frac{L_y}{m_y}} \cdot \ln(1/\epsilon)\right)$, where $L = \max\{L_x, L_{xy}, L_y\}$. This improves over the best known upper bound $\tilde{O}\left(\sqrt{\frac{L^2}{m_x m_y}} \cdot \ln^3(1/\epsilon)\right)$ by Lin et al. \cite{lin2020near}. Our bound achieves a linear convergence rate and a tighter dependency on the condition numbers, especially when $L_{xy} \ll L$ (i.e., the weak interaction regime). Via a simple reduction, our new bound also implies improved bounds for strongly convex-concave problems and convex-concave problems. When $f$ is quadratic, we can further improve the bound to $O\left(\sqrt{\frac{L_x}{m_x} + \frac{L_{xy}^2}{m_x m_y} + \frac{L_y}{m_y}} \cdot \left(\frac{L^2}{m_x m_y}\right)^{o(1)} \ln(1/\epsilon)\right)$, which matches the lower bound up to a sub-polynomial factor.
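To make the improvement concrete, here is a worked instantiation in the weak interaction regime (an illustrative parameter choice of our own, not taken from the abstract): set $m_x = m_y = m$, $L_x = L_y = L$, and $L_{xy} = m$, so that $L_{xy} \ll L$ whenever the condition number $\kappa = L/m$ is large. The two upper bounds then evaluate to

\[
\underbrace{\tilde{O}\!\left(\sqrt{\tfrac{L^2}{m^2}}\,\ln^3(1/\epsilon)\right)}_{\text{Lin et al. \cite{lin2020near}}}
= \tilde{O}\!\left(\kappa \,\ln^3(1/\epsilon)\right),
\qquad
\underbrace{\tilde{O}\!\left(\sqrt{\tfrac{L}{m} + \tfrac{L\cdot m}{m^2} + \tfrac{L}{m}}\,\ln(1/\epsilon)\right)}_{\text{this paper}}
= \tilde{O}\!\left(\sqrt{\kappa}\,\ln(1/\epsilon)\right),
\]

since each of the three terms under the square root equals $\kappa$. In this regime the new bound is quadratically better in $\kappa$, in addition to shaving two logarithmic factors.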
