

Smooth Games Optimization and Machine Learning

Simon Lacoste-Julien · Ioannis Mitliagkas · Gauthier Gidel · Vasilis Syrgkanis · Eva Tardos · Leon Bottou · Sebastian Nowozin

Room 512 ABEF


Advances in generative modeling and adversarial learning gave rise to a recent surge of interest in smooth two-player games, specifically in the context of learning generative adversarial networks (GANs). Solving these games raises intrinsically different challenges than the minimization tasks the machine learning community is used to. The goal of this workshop is to bring together the several communities interested in such smooth games, in order to present what is known on the topic and to identify current open questions, such as how to handle the non-convexity appearing in GANs.

Background and objectives

A number of problems and applications in machine learning are formulated as games. A special class of games, smooth games, has come into the spotlight recently with the advent of GANs. In a two-player smooth game, each player attempts to minimize their differentiable cost function, which also depends on the action of the other player. The dynamics of such games are distinct from the better-understood dynamics of optimization problems. For example, the Jacobian of simultaneous gradient descent on a smooth two-player game can be non-symmetric and have complex eigenvalues. Recent work by ML researchers has identified these dynamics as a key challenge for efficiently solving such problems.
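A minimal numerical sketch of this point, on the standard toy bilinear game f(x, y) = xy (this example is an illustration, not part of the workshop materials): one player minimizes f over x, the other minimizes -f over y, and the Jacobian of the simultaneous-gradient vector field is already non-symmetric with purely imaginary eigenvalues.

```python
import numpy as np

# Toy bilinear game: player 1 minimizes f(x, y) = x*y over x,
# player 2 minimizes -f(x, y) over y (i.e., maximizes f).
# The simultaneous-gradient vector field is v(x, y) = (df/dx, -d(-f)/dy... )
# more precisely: v(x, y) = (y, -x), whose Jacobian is constant:
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Unlike the Hessian of a single minimization objective, J is not symmetric,
# and its eigenvalues are complex (here purely imaginary: +i and -i).
eigvals = np.linalg.eigvals(J)
print(eigvals)  # purely imaginary, +/- 1j (ordering may vary)
```

Purely imaginary eigenvalues mean the continuous-time dynamics rotate around the equilibrium rather than flow toward it, which is one concrete way game dynamics depart from minimization.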

A major hurdle for relevant research in the ML community is the lack of interaction with the mathematical programming and game theory communities, where similar problems have been tackled in the past, yielding useful tools. While ML researchers are quite familiar with the convex optimization toolbox from mathematical programming, they are less familiar with the tools for solving games. For example, the extragradient algorithm for solving variational inequalities has been known in the mathematical programming literature for decades; the ML community, however, has until recently mainly appealed to gradient descent to optimize adversarial objectives.
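To illustrate the contrast (a sketch on the same toy bilinear game as above, under assumed step size and iteration counts, not a prescription from the workshop): extragradient evaluates the gradient at an extrapolated point, which stabilizes dynamics on which plain simultaneous gradient descent diverges.

```python
import numpy as np

def v(z):
    """Simultaneous-gradient field of the bilinear game f(x, y) = x*y."""
    x, y = z
    return np.array([y, -x])

eta = 0.1  # step size (illustrative choice)
z_gd = np.array([1.0, 1.0])  # plain simultaneous gradient descent iterate
z_eg = np.array([1.0, 1.0])  # extragradient iterate

for _ in range(100):
    # Gradient descent: step along the field at the current point.
    z_gd = z_gd - eta * v(z_gd)
    # Extragradient: first extrapolate, then step along the field
    # evaluated at the extrapolated point.
    z_half = z_eg - eta * v(z_eg)
    z_eg = z_eg - eta * v(z_half)

print(np.linalg.norm(z_gd))  # grows: gradient descent spirals away from (0, 0)
print(np.linalg.norm(z_eg))  # shrinks: extragradient converges toward (0, 0)
```

On this game the gradient-descent iteration matrix has spectral radius sqrt(1 + eta^2) > 1 for any positive step size, while the extragradient correction brings it below 1, which is why the two trajectories behave so differently.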

The aim of this workshop is to provide a platform for both theoretical and applied researchers from the ML, mathematical programming, and game theory communities to discuss the status of our understanding of the interplay between smooth games and their applications in ML, as well as the existing tools and methods for dealing with them. We also encourage, and will devote time during the workshop to, work that identifies and discusses open, forward-looking problems of interest to the NIPS community.

Examples of topics of interest to the workshop are as follows:

  • Other examples of smooth games in machine learning (e.g. actor-critic models in RL).
  • Standard or novel algorithms to solve smooth games.
  • Empirical test of algorithms on GAN applications.
  • Existence and uniqueness results for equilibria in smooth games.
  • Can approximate equilibria have better properties than exact ones? [Arora 2017, Lipton and Young 1994].
  • Variational inequality algorithms [Harker and Pang 1990, Gidel et al. 2018].
  • Handling stochasticity [Hazan et al. 2017] or non-convexity [Grnarova et al. 2018] in smooth games.
  • Related topics from mathematical programming (e.g. bilevel optimization) [Pfau and Vinyals 2016].


