Poster
Sinkhorn Natural Gradient for Generative Models
Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
We consider the problem of minimizing a functional over a parametric family of probability measures, where the parameterization is characterized via a push-forward structure.
An important application of this problem is in training generative adversarial networks.
In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically with respect to the desired accuracy. This is in sharp contrast to existing natural gradient methods that can only be carried out approximately.
Moreover, in practical applications where only Monte-Carlo-type integration is available, we design an empirical estimator for SIM and provide a stability analysis.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
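For background on the divergence that SiNG descends on, the following is a minimal NumPy sketch (not the paper's implementation) of the debiased Sinkhorn divergence between two empirical measures, computed with standard log-domain Sinkhorn iterations on the dual potentials. The function names, uniform sample weights, squared-Euclidean ground cost, and fixed iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def _logsumexp(a, axis):
    """Numerically stable log-sum-exp along the given axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def sinkhorn_cost(x, y, eps=0.1, n_iter=300):
    """Entropy-regularized OT cost OT_eps between uniform empirical
    measures on samples x (n x d) and y (m x d), via log-domain
    Sinkhorn updates of the dual potentials f and g."""
    n, m = len(x), len(y)
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared-Euclidean cost
    log_mu = np.full(n, -np.log(n))  # uniform source weights (log-domain)
    log_nu = np.full(m, -np.log(m))  # uniform target weights (log-domain)
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        # alternating dual updates; each is a soft-min of the cost
        f = -eps * _logsumexp((g[None, :] - C) / eps + log_nu[None, :], axis=1)
        g = -eps * _logsumexp((f[:, None] - C) / eps + log_mu[:, None], axis=0)
    # dual objective evaluated at the (approximate) fixed point
    return np.exp(log_mu) @ f + np.exp(log_nu) @ g

def sinkhorn_divergence(x, y, eps=0.1):
    """Debiased Sinkhorn divergence:
    S_eps(mu, nu) = OT_eps(mu, nu) - (OT_eps(mu, mu) + OT_eps(nu, nu)) / 2."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * sinkhorn_cost(x, x, eps)
            - 0.5 * sinkhorn_cost(y, y, eps))
```

The debiasing terms make the divergence vanish when the two sample sets coincide, which is why it is preferred over the raw regularized cost as a training loss.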
Author Information
Zebang Shen (University of Pennsylvania)
Zhenfu Wang (Peking University)
Alejandro Ribeiro (University of Pennsylvania)
Hamed Hassani (University of Pennsylvania)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Sinkhorn Natural Gradient for Generative Models »
  Fri. Dec 11th, 03:10 -- 03:20 AM, Room: Orals & Spotlights: Neuroscience/Probabilistic
More from the Same Authors
- 2021 : State Augmented Constrained Reinforcement Learning: Overcoming the Limitations of Learning with Rewards »
  Miguel Calvo-Fullana · Santiago Paternain · Alejandro Ribeiro
- 2022 : Convolutional Neural Networks on Manifolds: From Graphs and Back »
  Zhiyang Wang · Luana Ruiz · Alejandro Ribeiro
- 2022 Poster: Collaborative Learning of Discrete Distributions under Heterogeneity and Communication Constraints »
  Xinmeng Huang · Donghwan Lee · Edgar Dobriban · Hamed Hassani
- 2022 Poster: A Lagrangian Duality Approach to Active Learning »
  Juan Elenter · Navid Naderializadeh · Alejandro Ribeiro
- 2022 Poster: Probable Domain Generalization via Quantile Risk Minimization »
  Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
- 2022 Poster: FedAvg with Fine Tuning: Local Updates Lead to Representation Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2022 Poster: coVariance Neural Networks »
  Saurabh Sihag · Gonzalo Mateos · Corey McMillan · Alejandro Ribeiro
- 2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds »
  Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
- 2021 Poster: Adversarial Robustness with Semi-Infinite Constrained Learning »
  Alexander Robey · Luiz Chamon · George J. Pappas · Hamed Hassani · Alejandro Ribeiro
- 2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent »
  Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Session: Orals & Spotlights Track 32: Optimization »
  Hamed Hassani · Jeffrey A Bilmes
- 2020 Poster: Graphon Neural Networks and the Transferability of Graph Neural Networks »
  Luana Ruiz · Luiz Chamon · Alejandro Ribeiro
- 2020 Poster: Submodular Meta-Learning »
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2020 Poster: Probably Approximately Correct Constrained Learning »
  Luiz Chamon · Alejandro Ribeiro
- 2019 : Poster and Coffee Break 1 »
  Aaron Sidford · Aditya Mahajan · Alejandro Ribeiro · Alex Lewandowski · Ali H Sayed · Ambuj Tewari · Angelika Steger · Anima Anandkumar · Asier Mujika · Hilbert J Kappen · Bolei Zhou · Byron Boots · Chelsea Finn · Chen-Yu Wei · Chi Jin · Ching-An Cheng · Christina Yu · Clement Gehring · Craig Boutilier · Dahua Lin · Daniel McNamee · Daniel Russo · David Brandfonbrener · Denny Zhou · Devesh Jha · Diego Romeres · Doina Precup · Dominik Thalmeier · Eduard Gorbunov · Elad Hazan · Elena Smirnova · Elvis Dohmatob · Emma Brunskill · Enrique Munoz de Cote · Ethan Waldie · Florian Meier · Florian Schaefer · Ge Liu · Gergely Neu · Haim Kaplan · Hao Sun · Hengshuai Yao · Jalaj Bhandari · James A Preiss · Jayakumar Subramanian · Jiajin Li · Jieping Ye · Jimmy Smith · Joan Bas Serrano · Joan Bruna · John Langford · Jonathan Lee · Jose A. Arjona-Medina · Kaiqing Zhang · Karan Singh · Yuping Luo · Zafarali Ahmed · Zaiwei Chen · Zhaoran Wang · Zhizhong Li · Zhuoran Yang · Ziping Xu · Ziyang Tang · Yi Mao · David Brandfonbrener · Shirli Di-Castro · Riashat Islam · Zuyue Fu · Abhishek Naik · Saurabh Kumar · Benjamin Petit · Angeliki Kamoutsi · Simone Totaro · Arvind Raghunathan · Rui Wu · Donghwan Lee · Dongsheng Ding · Alec Koppel · Hao Sun · Christian Tjandraatmadja · Mahdi Karami · Jincheng Mei · Chenjun Xiao · Junfeng Wen · Zichen Zhang · Ross Goroshin · Mohammad Pezeshki · Jiaqi Zhai · Philip Amortila · Shuo Huang · Mariya Vasileva · El houcine Bergou · Adel Ahmadyan · Haoran Sun · Sheng Zhang · Lukas Gruber · Yuanhao Wang · Tetiana Parshakova
- 2019 Poster: Constrained Reinforcement Learning Has Zero Duality Gap »
  Santiago Paternain · Luiz Chamon · Miguel Calvo-Fullana · Alejandro Ribeiro
- 2019 Poster: Online Continuous Submodular Maximization: From Full-Information to Bandit Feedback »
  Mingrui Zhang · Lin Chen · Hamed Hassani · Amin Karbasi
- 2019 Poster: Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match »
  Amin Karbasi · Hamed Hassani · Aryan Mokhtari · Zebang Shen
- 2019 Poster: Stability of Graph Scattering Transforms »
  Fernando Gama · Alejandro Ribeiro · Joan Bruna
- 2019 Poster: Robust and Communication-Efficient Collaborative Learning »
  Amirhossein Reisizadeh · Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks »
  Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2017 Poster: Approximate Supermodularity Bounds for Experimental Design »
  Luiz Chamon · Alejandro Ribeiro
- 2017 Poster: First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization »
  Aryan Mokhtari · Alejandro Ribeiro
- 2017 Poster: Gradient Methods for Submodular Maximization »
  Hamed Hassani · Mahdi Soltanolkotabi · Amin Karbasi
- 2017 Poster: Stochastic Submodular Maximization: The Case of Coverage Functions »
  Mohammad Karimi · Mario Lucic · Hamed Hassani · Andreas Krause
- 2016 Poster: Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy »
  Aryan Mokhtari · Hadi Daneshmand · Aurelien Lucchi · Thomas Hofmann · Alejandro Ribeiro