Poster
Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance
Kimia Nadjahi · Alain Durmus · Umut Simsekli · Roland Badeau
Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #226
Minimum expected distance estimation (MEDE) algorithms have been widely used for probabilistic models with intractable likelihood functions, and they have become increasingly popular due to their use in implicit generative modeling (e.g., Wasserstein generative adversarial networks, Wasserstein autoencoders). Emerging from computational optimal transport, the Sliced-Wasserstein (SW) distance has become a popular choice in MEDE thanks to its simplicity and computational benefits. While several studies have reported empirical success on generative modeling with SW, the theoretical properties of such estimators have not yet been established. In this study, we investigate the asymptotic properties of estimators that are obtained by minimizing SW. We first show that convergence in SW implies weak convergence of probability measures in general Wasserstein spaces. Then we show that estimators obtained by minimizing SW (and also an approximate version of SW) are asymptotically consistent. We finally prove a central limit theorem, which characterizes the asymptotic distribution of the estimators and establishes a convergence rate of $\sqrt{n}$, where $n$ denotes the number of observed data points. We illustrate the validity of our theory on both synthetic data and neural networks.
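The computational appeal of SW mentioned in the abstract comes from the fact that the Wasserstein distance between one-dimensional empirical measures reduces to matching order statistics, so SW can be estimated by averaging cheap 1D distances over random projections. The sketch below is not the authors' code; it is a minimal illustrative Monte Carlo estimator of $\mathrm{SW}_p$ between two equally sized samples, with the function name and parameters chosen here for illustration.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the Sliced-Wasserstein distance SW_p
    between two empirical samples X and Y of equal size (n, d)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Draw random directions uniformly on the unit sphere in R^d
    # (normalized Gaussian vectors are uniform on the sphere).
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction: shape (n, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T
    # In 1D, the p-Wasserstein distance between empirical measures of
    # equal size is obtained by matching sorted samples.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    sw_p = np.mean(np.abs(X_sorted - Y_sorted) ** p)
    return sw_p ** (1.0 / p)
```

An MEDE procedure would then minimize this quantity between model samples and observed data with respect to the model parameters; the paper's consistency and CLT results concern exactly such minimizers as $n$ grows.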
Author Information
Kimia Nadjahi (Télécom ParisTech)
Alain Durmus (ENS Paris Saclay)
Umut Simsekli (Institut Polytechnique de Paris / University of Oxford)
Roland Badeau (Télécom ParisTech)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Spotlight: Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance
  Fri. Dec 13th 12:05 -- 12:10 AM Room West Ballroom A + B
More from the Same Authors
- 2021 Spotlight: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2022 Affinity Workshop: Women in Machine Learning - Virtual
  Mariam Arab · Konstantina Palla · Sergul Aydore · Gloria Namanya · Beliz Gunel · Kimia Nadjahi · Soomin Aga Lee
- 2022 Poster: Local-Global MCMC kernels: the best of both worlds
  Sergey Samsonov · Evgeny Lagutin · Marylou Gabrié · Alain Durmus · Alexey Naumov · Eric Moulines
- 2022 Poster: FedPop: A Bayesian Approach for Personalised Federated Learning
  Nikita Kotelevskii · Maxime Vono · Alain Durmus · Eric Moulines
- 2022 Affinity Workshop: Women in Machine Learning
  Mariam Arab · Konstantina Palla · Sergul Aydore · Gloria Namanya · Beliz Gunel · Kimia Nadjahi · Soomin Aga Lee
- 2021 Poster: Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks
  Melih Barsbey · Milad Sefidgaran · Murat Erdogdu · Gaël Richard · Umut Simsekli
- 2021 Poster: Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
  Tolga Birdal · Aaron Lou · Leonidas Guibas · Umut Simsekli
- 2021 Poster: NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform
  Achille Thin · Yazid Janati El Idrissi · Sylvain Le Corff · Charles Ollion · Eric Moulines · Arnaud Doucet · Alain Durmus · Christian X Robert
- 2021 Poster: Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
  Hongjian Wang · Mert Gurbuzbalaban · Lingjiong Zhu · Umut Simsekli · Murat Erdogdu
- 2021 Poster: Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections
  Kimia Nadjahi · Alain Durmus · Pierre E Jacob · Roland Badeau · Umut Simsekli
- 2021 Poster: Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
  Alexander Camuto · George Deligiannidis · Murat Erdogdu · Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu
- 2021 Poster: Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize
  Alain Durmus · Eric Moulines · Alexey Naumov · Sergey Samsonov · Kevin Scaman · Hoi-To Wai
- 2020 Poster: Statistical and Topological Properties of Sliced Probability Divergences
  Kimia Nadjahi · Alain Durmus · Lénaïc Chizat · Soheil Kolouri · Shahin Shahrampour · Umut Simsekli
- 2020 Spotlight: Statistical and Topological Properties of Sliced Probability Divergences
  Kimia Nadjahi · Alain Durmus · Lénaïc Chizat · Soheil Kolouri · Shahin Shahrampour · Umut Simsekli
- 2020 Poster: Quantitative Propagation of Chaos for SGD in Wide Neural Networks
  Valentin De Bortoli · Alain Durmus · Xavier Fontaine · Umut Simsekli
- 2019 Poster: First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise
  Thanh Huy Nguyen · Umut Simsekli · Mert Gurbuzbalaban · Gaël Richard
- 2019 Poster: Copula-like Variational Inference
  Marcel Hirt · Petros Dellaportas · Alain Durmus
- 2019 Poster: Generalized Sliced Wasserstein Distances
  Soheil Kolouri · Kimia Nadjahi · Umut Simsekli · Roland Badeau · Gustavo Rohde
- 2018 Poster: The promises and pitfalls of Stochastic Gradient Langevin Dynamics
  Nicolas Brosse · Alain Durmus · Eric Moulines
- 2018 Poster: Bayesian Pose Graph Optimization via Bingham Distributions and Tempered Geodesic MCMC
  Tolga Birdal · Umut Simsekli · Mustafa Onur Eken · Slobodan Ilic
- 2017 Poster: Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding
  Mainak Jas · Tom Dupré la Tour · Umut Simsekli · Alexandre Gramfort
- 2016 Poster: Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo
  Alain Durmus · Umut Simsekli · Eric Moulines · Roland Badeau · Gaël Richard
- 2011 Poster: Generalised Coupled Tensor Factorisation
  Kenan Y Yılmaz · Taylan Cemgil · Umut Simsekli