Poster
Sinkhorn Natural Gradient for Generative Models
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
We consider the problem of minimizing a functional over a parametric family of probability measures, where the parameterization is characterized via a push-forward structure.
An important application of this problem is in training generative adversarial networks.
To this end, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm that acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically with respect to the desired accuracy. This is in sharp contrast to existing natural gradient methods that can only be carried out approximately.
Moreover, in practical applications where only Monte Carlo-type integration is available, we design an empirical estimator for the SIM and provide a stability analysis.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
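Below is a minimal, hypothetical sketch of what a SiNG-style update could look like, assuming a flat generator parameter vector, a stand-in `push_forward(theta, z)` generator, a stand-in `loss_fn`, and a reading of the SIM as the Hessian of the Sinkhorn divergence at the current parameters; none of these names or choices come from the paper itself.

```python
# Minimal, hypothetical sketch of a SiNG-style update in PyTorch; NOT the authors'
# implementation. Assumed stand-ins: push_forward(theta, z) maps a flat parameter
# vector and latent samples to generated samples, loss_fn is the functional being
# minimized over the push-forward family.
import torch

def sinkhorn_divergence(x, y, eps=0.1, iters=50):
    """Debiased entropic-OT (Sinkhorn) divergence between two uniform samples."""
    def ot_eps(a, b):
        C = torch.cdist(a, b) ** 2            # squared-Euclidean cost matrix
        K = torch.exp(-C / eps)               # Gibbs kernel
        u = torch.full((a.shape[0],), 1.0 / a.shape[0])
        v = torch.full((b.shape[0],), 1.0 / b.shape[0])
        for _ in range(iters):                # Sinkhorn fixed-point iterations
            u = (1.0 / a.shape[0]) / (K @ v)
            v = (1.0 / b.shape[0]) / (K.T @ u)
        return torch.sum(u[:, None] * K * v[None, :] * C)  # transport cost <P, C>
    # Debiasing: S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2.
    return ot_eps(x, y) - 0.5 * (ot_eps(x, x) + ot_eps(y, y))

def sing_step(theta, z, data, loss_fn, lr=0.1, damping=1e-3):
    """One SiNG-style step: precondition the loss gradient with an empirical
    Sinkhorn information matrix, here taken (as an assumption) to be the Hessian
    of the Sinkhorn divergence between the current generator and a fixed copy."""
    theta = theta.detach().requires_grad_(True)       # flat parameter vector
    loss = loss_fn(push_forward(theta, z), data)
    grad = torch.autograd.grad(loss, theta)[0]
    ref = push_forward(theta.detach(), z)             # fixed reference sample
    sim = torch.autograd.functional.hessian(
        lambda t: sinkhorn_divergence(push_forward(t, z), ref), theta.detach())
    # Damped linear solve in place of an exact inverse, for numerical stability.
    step = torch.linalg.solve(sim + damping * torch.eye(theta.numel()), grad)
    return theta.detach() - lr * step
```

Note that this sketch obtains the information matrix by naively differentiating through the unrolled Sinkhorn iterations, which is exactly the expensive route the paper avoids: the abstract states that the SIM admits an explicit expression and can be evaluated to accuracy eps in cost scaling only logarithmically in 1/eps.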
Author Information
Zebang Shen (University of Pennsylvania)
Zhenfu Wang (Peking University)
Alejandro Ribeiro (University of Pennsylvania)
Hamed Hassani (University of Pennsylvania)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Sinkhorn Natural Gradient for Generative Models
  Fri Dec 11th 03:10 -- 03:20 AM · Room: Orals & Spotlights: Neuroscience/Probabilistic
More from the Same Authors
- 2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Session: Orals & Spotlights Track 32: Optimization
  Hamed Hassani · Jeffrey A Bilmes
- 2020 Poster: Graphon Neural Networks and the Transferability of Graph Neural Networks
  Luana Ruiz · Luiz Chamon · Alejandro Ribeiro
- 2020 Poster: Submodular Meta-Learning
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2020 Poster: Probably Approximately Correct Constrained Learning
  Luiz Chamon · Alejandro Ribeiro
- 2019 Poster: Constrained Reinforcement Learning Has Zero Duality Gap
  Santiago Paternain · Luiz Chamon · Miguel Calvo-Fullana · Alejandro Ribeiro
- 2019 Poster: Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback
  Mingrui Zhang · Lin Chen · Hamed Hassani · Amin Karbasi
- 2019 Poster: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match
  Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
- 2019 Poster: Stability of Graph Scattering Transforms
  Fernando Gama · Alejandro Ribeiro · Joan Bruna
- 2019 Poster: Robust and Communication-Efficient Collaborative Learning
  Amirhossein Reisizadeh · Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George Pappas
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George Pappas
- 2017 Poster: Approximate Supermodularity Bounds for Experimental Design
  Luiz Chamon · Alejandro Ribeiro
- 2017 Poster: First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization
  Aryan Mokhtari · Alejandro Ribeiro
- 2017 Poster: Gradient Methods for Submodular Maximization
  Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi
- 2017 Poster: Stochastic Submodular Maximization: The Case of Coverage Functions
  Mohammad Karimi · Mario Lucic · Hamed Hassani · Andreas Krause
- 2016 Poster: Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy
  Aryan Mokhtari · Hadi Daneshmand · Aurelien Lucchi · Thomas Hofmann · Alejandro Ribeiro