

Poster

Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems

Luo Luo · Haishan Ye · Zhichao Huang · Tong Zhang

Poster Session 3 #825

Abstract: We consider nonconvex-concave minimax optimization problems of the form $\min_x \max_{y \in \mathcal{Y}} f(x, y)$, where $f$ is strongly concave in $y$ but possibly nonconvex in $x$ and $\mathcal{Y}$ is a convex and compact set. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of $f$ at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an $\mathcal{O}(\varepsilon)$-stationary point of the function $\Phi(\cdot) = \max_{y \in \mathcal{Y}} f(\cdot, y)$. The most popular algorithm to solve this problem is stochastic gradient descent ascent, which requires $\mathcal{O}(\kappa^3 \varepsilon^{-4})$ stochastic gradient evaluations, where $\kappa$ is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of $\mathcal{O}(\kappa^3 \varepsilon^{-3})$, and its dependency on $\varepsilon$ is optimal for this problem.
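The sketch below illustrates the general idea of a variance-reduced stochastic gradient descent ascent loop with a recursive (SARAH/SPIDER-style) gradient estimator, the mechanism SREDA builds on: the gradient is re-estimated from a large batch every $q$ iterations and updated recursively from small batches in between. It is not the authors' algorithm (which also involves a specialized maximizer update and initialization); all function names, batch sizes, and step sizes here are illustrative assumptions.

```python
import numpy as np

def variance_reduced_gda(grad_xy, sample, x0, y0, project_y,
                         lr_x=1e-3, lr_y=1e-2, q=10,
                         big_batch=1024, small_batch=16,
                         num_iters=1000, seed=0):
    """Sketch of stochastic gradient descent ascent with a recursive
    variance-reduced gradient estimator.

    grad_xy(x, y, batch) -> (g_x, g_y): unbiased stochastic gradients of f.
    sample(n, rng)       -> a minibatch of n data points.
    project_y(y)         -> projection onto the convex compact set Y.
    """
    rng = np.random.default_rng(seed)
    x, y = x0.copy(), y0.copy()
    x_prev, y_prev = x.copy(), y.copy()
    gx = gy = None

    for t in range(num_iters):
        if t % q == 0:
            # Periodic gradient estimate from a large batch.
            gx, gy = grad_xy(x, y, sample(big_batch, rng))
        else:
            # Recursive update: correct the previous estimate using the
            # gradient difference on a small batch evaluated at the new
            # and old iterates, which keeps the estimator's variance low.
            batch = sample(small_batch, rng)
            gx_new, gy_new = grad_xy(x, y, batch)
            gx_old, gy_old = grad_xy(x_prev, y_prev, batch)
            gx = gx + gx_new - gx_old
            gy = gy + gy_new - gy_old

        x_prev, y_prev = x.copy(), y.copy()
        x = x - lr_x * gx                 # descent step on x
        y = project_y(y + lr_y * gy)      # projected ascent step on y

    return x, y
```

Here the descent step on $x$ and the projected ascent step on $y$ use the same variance-reduced estimates; in contrast, plain stochastic gradient descent ascent would use fresh minibatch gradients at every iteration, which is what drives its worse $\varepsilon^{-4}$ dependence.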
