Generative models produce realistic objects in many domains, including text, image, video, and audio synthesis. The most popular models, Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), usually employ a standard Gaussian distribution as the prior. Previous works show that a richer family of prior distributions can help avoid mode collapse in GANs and improve the evidence lower bound in VAEs. We propose a new family of prior distributions, the Tensor Ring Induced Prior (TRIP), which packs an exponential number of Gaussians into a high-dimensional lattice using a relatively small number of parameters. We show that these priors improve the Fréchet Inception Distance for GANs and the Evidence Lower Bound for VAEs. We also study generative models with TRIP in the conditional generation setup with missing conditions. Altogether, we propose a novel plug-and-play framework for generative models that can be used with any GAN- or VAE-like architecture.
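To make the construction concrete, the sketch below illustrates one way such a prior can be evaluated, assuming (as the abstract's description suggests) that it is a mixture of axis-aligned Gaussians placed on a d-dimensional grid, with mixture weights given by a tensor ring over non-negative cores. The function name `trip_log_density` and all parameter shapes are illustrative assumptions, not the authors' implementation; the point is that contracting the ring one latent dimension at a time evaluates the exponentially large mixture in time linear in the number of dimensions.

```python
import numpy as np


def trip_log_density(z, cores, means, log_stds):
    """Log-density of a tensor-ring-induced mixture prior at one latent vector z.

    Illustrative sketch only. Assumed parameterization:
      z        : (d,)         latent vector
      cores    : (d, N, m, m) non-negative tensor ring cores Q_k[s_k]
      means    : (d, N)       Gaussian means mu_{k, s_k} of the grid nodes
      log_stds : (d, N)       log standard deviations of the grid nodes
    Assumed density: p(z) = sum_{s_1..s_d} w(s) * prod_k N(z_k | mu_{k,s_k}, sigma_{k,s_k}^2),
    with w(s) proportional to trace(Q_1[s_1] ... Q_d[s_d]).
    """
    d, N, m, _ = cores.shape
    stds = np.exp(log_stds)
    # Per-dimension Gaussian densities at z_k for every grid node s_k: shape (d, N).
    dens = np.exp(-0.5 * ((z[:, None] - means) / stds) ** 2) / (np.sqrt(2.0 * np.pi) * stds)
    num = np.eye(m)  # accumulates prod_k sum_{s_k} dens[k, s_k] * Q_k[s_k]
    den = np.eye(m)  # accumulates prod_k sum_{s_k} Q_k[s_k]  (normalization constant)
    for k in range(d):
        num = num @ np.tensordot(dens[k], cores[k], axes=(0, 0))
        den = den @ cores[k].sum(axis=0)
    return np.log(np.trace(num)) - np.log(np.trace(den))


# Toy usage with random non-negative cores (all values hypothetical).
rng = np.random.default_rng(0)
d, N, m = 4, 10, 3
cores = rng.uniform(size=(d, N, m, m))   # non-negativity keeps mixture weights valid
means = rng.normal(size=(d, N))
log_stds = np.full((d, N), -1.0)
z = rng.normal(size=d)
print(trip_log_density(z, cores, means, log_stds))
```

A practical implementation would work in the log domain for numerical stability and would also need sampling and reparameterized gradients; this sketch only shows why the exponentially large mixture stays tractable.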
Author Information
Maxim Kuznetsov (Insilico Medicine)
Daniil Polykovskiy (Insilico Medicine)
Dmitry Vetrov (Higher School of Economics, Samsung AI Center, Moscow)
Alex Zhebrak (Insilico Medicine)
More from the Same Authors
- 2022 Poster: HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
  Aibek Alanov · Vadim Titov · Dmitry Vetrov
- 2023 Poster: Star-Shaped Denoising Diffusion Probabilistic Models
  Andrey Okhotin · Dmitry Molchanov · Arkhipkin Vladimir · Grigory Bartosh · Viktor Ohanesian · Aibek Alanov · Dmitry Vetrov
- 2023 Poster: Entropic Neural Optimal Transport via Diffusion Processes
  Nikita Gushchin · Alexander Kolesov · Alexander Korotin · Dmitry Vetrov · Evgeny Burnaev
- 2023 Poster: To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learning
  Ildus Sadrtdinov · Dmitrii Pozdeev · Dmitry Vetrov · Ekaterina Lobacheva
- 2023 Oral: Entropic Neural Optimal Transport via Diffusion Processes
  Nikita Gushchin · Alexander Kolesov · Alexander Korotin · Dmitry Vetrov · Evgeny Burnaev
- 2022 Spotlight: Lightning Talks 3B-2
  Yu Huang · Tero Karras · Maxim Kodryan · Shiau Hong Lim · Shudong Huang · Ziyu Wang · Siqiao Xue · ILYAS MALIK · Ekaterina Lobacheva · Miika Aittala · Hongjie Wu · Yuhao Zhou · Yingbin Liang · Xiaoming Shi · Jun Zhu · Maksim Nakhodnov · Timo Aila · Yazhou Ren · James Zhang · Longbo Huang · Dmitry Vetrov · Ivor Tsang · Hongyuan Mei · Samuli Laine · Zenglin Xu · Wentao Feng · Jiancheng Lv
- 2022 Spotlight: HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks
  Aibek Alanov · Vadim Titov · Dmitry Vetrov
- 2022 Spotlight: Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes
  Maxim Kodryan · Ekaterina Lobacheva · Maksim Nakhodnov · Dmitry Vetrov
- 2022 Spotlight: Lightning Talks 3B-1
  Tianying Ji · Tongda Xu · Giulia Denevi · Aibek Alanov · Martin Wistuba · Wei Zhang · Yuesong Shen · Massimiliano Pontil · Vadim Titov · Yan Wang · Yu Luo · Daniel Cremers · Yanjun Han · Arlind Kadra · Dailan He · Josif Grabocka · Zhengyuan Zhou · Fuchun Sun · Carlo Ciliberto · Dmitry Vetrov · Mingxuan Jing · Chenjian Gao · Aaron Flores · Tsachy Weissman · Han Gao · Fengxiang He · Kunzan Liu · Wenbing Huang · Hongwei Qin
- 2022 Poster: Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes
  Maxim Kodryan · Ekaterina Lobacheva · Maksim Nakhodnov · Dmitry Vetrov
- 2021 Poster: Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces
  Kirill Struminsky · Artyom Gadetsky · Denis Rakitin · Danil Karpushkin · Dmitry Vetrov
- 2021 Poster: On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay
  Ekaterina Lobacheva · Maxim Kodryan · Nadezhda Chirkova · Andrey Malinin · Dmitry Vetrov
- 2020 Poster: On Power Laws in Deep Ensembles
  Ekaterina Lobacheva · Nadezhda Chirkova · Maxim Kodryan · Dmitry Vetrov
- 2020 Spotlight: On Power Laws in Deep Ensembles
  Ekaterina Lobacheva · Nadezhda Chirkova · Maxim Kodryan · Dmitry Vetrov
- 2019 Poster: The Implicit Metropolis-Hastings Algorithm
  Kirill Neklyudov · Evgenii Egorov · Dmitry Vetrov
- 2019 Poster: Importance Weighted Hierarchical Variational Inference
  Artem Sobolev · Dmitry Vetrov
- 2019 Poster: A Simple Baseline for Bayesian Uncertainty in Deep Learning
  Wesley Maddox · Pavel Izmailov · Timur Garipov · Dmitry Vetrov · Andrew Gordon Wilson
- 2018: TBC 2
  Dmitry Vetrov
- 2018 Poster: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
  Timur Garipov · Pavel Izmailov · Dmitrii Podoprikhin · Dmitry Vetrov · Andrew Wilson
- 2018 Spotlight: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
  Timur Garipov · Pavel Izmailov · Dmitrii Podoprikhin · Dmitry Vetrov · Andrew Wilson
- 2017 Poster: Structured Bayesian Pruning via Log-Normal Multiplicative Noise
  Kirill Neklyudov · Dmitry Molchanov · Arsenii Ashukha · Dmitry Vetrov
- 2016 Poster: PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
  Mikhail Figurnov · Aizhan Ibraimova · Dmitry Vetrov · Pushmeet Kohli
- 2015 Poster: M-Best-Diverse Labelings for Submodular Energies and Beyond
  Alexander Kirillov · Dmytro Shlezinger · Dmitry Vetrov · Carsten Rother · Bogdan Savchynskyy
- 2015 Poster: Tensorizing Neural Networks
  Alexander Novikov · Dmitrii Podoprikhin · Anton Osokin · Dmitry Vetrov