

Poster in Workshop: NeurIPS 2023 Workshop on Diffusion Models

$f$-GANs Settle Scores!

Siddarth Asokan · Nishanth Shetty · Aadithya Srikanth · Chandra Seelamantula


Abstract: Generative adversarial networks (GANs) comprise a generator, trained to learn the underlying distribution of the desired data, and a discriminator, trained to distinguish real samples from those output by the generator. A majority of the GAN literature focuses on understanding the optimality of the discriminator, typically under divergence-minimization losses. In this paper, we propose a unified approach to analyzing generator optimization through variational calculus, uncovering links to score-based diffusion models. Considering $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution. The proposed approach serves to unify score-based training and existing $f$-GAN flavors, leveraging results from normalizing flows, while also providing explanations for empirical phenomena such as the stability of non-saturating GAN losses and the state-of-the-art performance of discriminator guidance in diffusion models.
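As a sketch in standard notation (our own shorthand, not drawn from the paper: $p_d$ denotes the data density, $p_g$ the generator's output density, and $f$ a convex function with $f(1)=0$), the $f$-divergence objective and the stated score-matching optimality condition read

$$D_f(p_d \,\|\, p_g) = \int p_g(x)\, f\!\left(\frac{p_d(x)}{p_g(x)}\right) \mathrm{d}x, \qquad \nabla_x \log p_g(x) = \nabla_x \log p_d(x),$$

i.e., at the minimizer of $D_f$, the generator's output distribution shares its score function $\nabla_x \log p_g$ with that of the data, which is the quantity that score-based diffusion models estimate and follow during sampling.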
