Abstract:
We consider nonconvex-concave minimax optimization problems of the form $\min_{\mathbf{x}} \max_{\mathbf{y}\in\mathcal{Y}} f(\mathbf{x},\mathbf{y})$, where $f$ is strongly-concave in $\mathbf{y}$ but possibly nonconvex in $\mathbf{x}$ and $\mathcal{Y}$ is a convex and compact set. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of $f$ at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an $\mathcal{O}(\varepsilon)$-stationary point of the function $\Phi(\cdot)=\max_{\mathbf{y}\in\mathcal{Y}} f(\cdot,\mathbf{y})$. The most popular algorithm to solve this problem is stochastic gradient descent ascent, which requires $\mathcal{O}(\kappa^3\varepsilon^{-4})$ stochastic gradient evaluations, where $\kappa$ is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of $\mathcal{O}(\kappa^3\varepsilon^{-3})$, and its dependency on $\varepsilon$ is optimal for this problem.
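To make the problem setup concrete, below is a minimal sketch of the baseline stochastic gradient descent ascent (SGDA) loop on a toy objective that is nonconvex in $\mathbf{x}$ and strongly concave in $\mathbf{y}$, with $\mathbf{y}$ projected onto a compact box. The objective, step sizes, and noise model are illustrative assumptions and not taken from the paper; SREDA's recursive variance-reduced estimator is only indicated in a comment.

```python
# Illustrative sketch (not the paper's code): plain SGDA on
# f(x, y) = y * (x^2 - 1) - 0.5 * y^2, which is strongly concave in y,
# nonconvex in x, with y constrained to the compact set [-1, 1].
import numpy as np

rng = np.random.default_rng(0)

def stoch_grads(x, y, noise=0.1):
    """Unbiased stochastic gradients of the toy objective (Gaussian noise assumed)."""
    gx = 2.0 * x * y + noise * rng.standard_normal()        # df/dx
    gy = (x**2 - 1.0) - y + noise * rng.standard_normal()   # df/dy
    return gx, gy

x, y = 2.0, 0.0
eta_x, eta_y = 0.01, 0.1
for t in range(5000):
    gx, gy = stoch_grads(x, y)
    x = x - eta_x * gx                        # descent step on x
    y = np.clip(y + eta_y * gy, -1.0, 1.0)    # projected ascent step on y
    # A variance-reduced method such as SREDA would instead maintain a
    # recursive gradient estimate, refreshed periodically with a large batch,
    # which is what improves the epsilon dependence from eps^-4 to eps^-3.

print(f"x = {x:.3f}, y = {y:.3f}")  # should approach a stationary point of Phi
```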