
Assessing Generative Models via Precision and Recall
Mehdi S. M. Sajjadi · Olivier Bachem · Mario Lucic · Olivier Bousquet · Sylvain Gelly

Wed Dec 05 02:00 PM -- 04:00 PM (PST) @ Room 210 #60

Recent advances in generative modeling have led to an increased interest in the study of statistical divergences as a means of model comparison. Commonly used evaluation methods, such as the Fréchet Inception Distance (FID), correlate well with the perceived quality of samples and are sensitive to mode dropping. However, these metrics are unable to distinguish between different failure cases since they only yield one-dimensional scores. We propose a novel definition of precision and recall for distributions which disentangles the divergence into two separate dimensions. The proposed notion is intuitive, retains desirable properties, and naturally leads to an efficient algorithm that can be used to evaluate generative models. We relate this notion to total variation as well as to recent evaluation metrics such as Inception Score and FID. To demonstrate the practical utility of the proposed approach, we perform an empirical study on several variants of Generative Adversarial Networks and Variational Autoencoders. In an extensive set of experiments we show that the proposed metric is able to disentangle the quality of generated samples from the coverage of the target distribution.
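The core computation behind the proposed metric can be sketched as follows. Given histograms of real and generated samples over a shared set of bins (in the paper these come from clustering embedded samples; that step is omitted here), a precision-recall curve is traced by sweeping a trade-off parameter λ. The function name, bin inputs, and the number of sweep angles below are illustrative assumptions, not the authors' reference implementation:

```python
import numpy as np

def prd_curve(p, q, num_angles=51):
    """Sketch of a precision-recall curve for two discrete distributions.

    p: histogram of the reference (real) distribution over shared bins
    q: histogram of the model (generated) distribution over the same bins
    Returns arrays of (precision, recall) pairs, one per swept angle.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()  # normalize to probability distributions
    q /= q.sum()
    # Sweep lambda = tan(phi) for angles phi in (0, pi/2), so the curve
    # covers the full precision-recall trade-off.
    angles = np.linspace(1e-6, np.pi / 2 - 1e-6, num_angles)
    lam = np.tan(angles)
    # alpha(lambda) = sum_i min(lambda * p_i, q_i); beta = alpha / lambda
    precision = np.array([np.minimum(l * p, q).sum() for l in lam])
    recall = precision / lam
    return precision, recall
```

For identical distributions the curve reaches both precision 1 and recall 1; for distributions with disjoint support it collapses to the origin, which is the disentangling behavior the abstract describes.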

Author Information

Mehdi S. M. Sajjadi (Max Planck Institute for Intelligent Systems and ETH Center for Learning Systems)

I am a member of Michael Hirsch's Computational Imaging research group in Bernhard Schölkopf's Empirical Inference department. Additionally, I am affiliated with Hendrik Lensch at the University of Tübingen and am an associated PhD fellow of the ETH Center for Learning Systems. Previously, I worked with Ulrike von Luxburg in the theory of machine learning group at the University of Hamburg. My research interests include probabilistic and approximate algorithms, game AI, graph theory, computational photography, computer vision, and machine learning along with its countless applications. During my PhD, I am focusing on efficient intelligent algorithms for image and video processing, and on perceptual metrics for evaluation. More generally, I work on deep generative models. Among other topics, I apply high-level vision via deep learning to fundamental low-level vision tasks. Most recently, our work with convolutional generative adversarial networks has reached state-of-the-art results for single image super-resolution in both quantitative and qualitative benchmarks.

Olivier Bachem (Google AI (Brain team))
Mario Lucic (Google Brain)
Olivier Bousquet (Google Brain (Zurich))
Sylvain Gelly (Google Brain (Zurich))
