Poster
Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods
Maher Nouiehed · Maziar Sanjabi · Tianjian Huang · Jason Lee · Meisam Razaviyayn
East Exhibition Hall B, C #205
Keywords: [ Algorithms ] [ Adversarial Learning ] [ Non-Convex Optimization ] [ Optimization ]
Abstract:
Recent applications arising in machine learning have spurred significant interest in solving min-max saddle point games. This problem has been extensively studied in the convex-concave regime, for which a global equilibrium solution can be computed efficiently. In this paper, we study the problem in the non-convex regime and show that an $\varepsilon$--first order stationary point of the game can be computed when the objective of one of the players can be optimized to global optimality efficiently. In particular, we first consider the case where the objective of one of the players satisfies the Polyak-{\L}ojasiewicz (PL) condition. For such a game, we show that a simple multi-step gradient descent-ascent algorithm finds an $\varepsilon$--first order stationary point of the problem in $\widetilde{\mathcal{O}}(\varepsilon^{-2})$ iterations. We then show that our framework can also be applied to the case where the objective of the ``max-player'' is concave. In this case, we propose a multi-step gradient descent-ascent algorithm that finds an $\varepsilon$--first order stationary point of the game in $\widetilde{\mathcal{O}}(\varepsilon^{-3.5})$ iterations, which is the best known rate in the literature. We applied our algorithm to a fair classification problem on the Fashion-MNIST dataset and observed that the proposed algorithm results in smoother training and better generalization.
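The multi-step gradient descent-ascent scheme described in the abstract alternates several gradient ascent steps on the max-player's variable (to approximately solve the inner maximization) with a single gradient descent step on the min-player's variable. The sketch below is a minimal illustration of that idea on a toy objective; the objective, step sizes, and inner-step count are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

# Toy objective f(x, y) = x^2 + 3*sin(x)*sin(y) - y^2 (illustrative only;
# the y-part is strongly concave, so the inner maximization is well behaved).
def f_grad_x(x, y):
    return 2 * x + 3 * np.cos(x) * np.sin(y)

def f_grad_y(x, y):
    return 3 * np.sin(x) * np.cos(y) - 2 * y

def multi_step_gda(x0, y0, eta_x=0.01, eta_y=0.1, inner_steps=10, outer_steps=500):
    """Multi-step gradient descent-ascent: several ascent steps on y,
    then one descent step on x. Hyperparameters are illustrative."""
    x, y = x0, y0
    for _ in range(outer_steps):
        # Inner loop: approximately solve max_y f(x, y) for the current x.
        for _ in range(inner_steps):
            y = y + eta_y * f_grad_y(x, y)
        # Outer step: one gradient descent step on x at the (near-)maximizer y.
        x = x - eta_x * f_grad_x(x, y)
    return x, y

if __name__ == "__main__":
    x_star, y_star = multi_step_gda(x0=1.5, y0=-0.5)
    print("approximate first-order stationary point:", x_star, y_star)
```

In this sketch, the inner loop plays the role of the max-player's (near-)exact best response, which is what the PL or concavity assumption on that player's objective makes tractable.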