

Poster

Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Ananya Uppal · Shashank Singh · Barnabas Poczos

East Exhibition Hall B + C #243

Keywords: [ Algorithms -> Density Estimation; Deep Learning -> Adversarial Networks; Deep Learning -> Generative Models; Theory ] [ Spaces of Functions and Kernels ]

Award: Outstanding Paper Honorable Mention

Abstract:

We study the problem of estimating a nonparametric probability distribution under a family of losses called Besov IPMs. This family is quite large, including, for example, L^p distances, total variation distance, and generalizations of both Wasserstein (earth mover's) and Kolmogorov-Smirnov distances. For a wide variety of settings, we provide both lower and upper bounds, identifying precisely how the choice of loss function and assumptions on the data distribution interact to determine the minimax optimal convergence rate. We also show that, in many cases, linear distribution estimates, such as the empirical distribution or kernel density estimator, cannot converge at the optimal rate. These bounds generalize, unify, or improve on several recent and classical results. Moreover, IPMs can be used to formalize a statistical model of generative adversarial networks (GANs). Thus, our results imply bounds on the statistical error of a GAN; for example, in many cases, GANs can strictly outperform the best linear estimator.
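For context, an integral probability metric (IPM) over a discriminator class F measures the largest discrepancy in expectations that any discriminator in the class can detect. A minimal statement of the standard definition (here F is generic; the paper's Besov IPMs correspond to taking F to be a ball in a Besov space):

    d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim P}[f(X)] - \mathbb{E}_{X \sim Q}[f(X)] \right|

Familiar special cases come from familiar choices of F: functions bounded by 1 give total variation distance, and 1-Lipschitz functions give the Wasserstein (earth mover's) distance, consistent with the examples named in the abstract.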
