Abstract
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256×256 pixels. The source code is publicly available.
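The abstract names two concrete design elements: residual cells built from depth-wise separable convolutions with batch normalization, and a residual parameterization of the Normal distributions, in which the encoder predicts offsets relative to the prior of the same latent group. The PyTorch sketch below illustrates both ideas under stated assumptions; it is not the official NVAE code (which is publicly available), and all module and variable names here are illustrative.

```python
# Minimal sketch of two ideas from the NVAE abstract. Assumptions:
# PyTorch, SiLU activations, and a 5x5 depthwise kernel; the official
# implementation may differ in all of these details.
import torch
import torch.nn as nn


class DepthwiseSeparableCell(nn.Module):
    """Residual cell: BN -> activation -> depthwise conv -> pointwise conv -> BN."""

    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.SiLU(),
            # Depthwise conv: groups=channels applies one filter per channel.
            nn.Conv2d(channels, channels, kernel_size=5, padding=2, groups=channels),
            # Pointwise 1x1 conv mixes information across channels.
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)  # residual connection


def residual_posterior(prior_mu, prior_log_sigma, delta_mu, delta_log_sigma):
    """Residual parameterization: the encoder outputs offsets
    (delta_mu, delta_log_sigma) relative to the prior, so
    q = N(mu_p + delta_mu, sigma_p * exp(delta_log_sigma))."""
    return prior_mu + delta_mu, prior_log_sigma + delta_log_sigma


def kl_residual(delta_mu, delta_log_sigma, prior_log_sigma):
    """KL(q || p) per dimension. With the residual form it reduces to
    0.5 * (delta_mu^2 / sigma_p^2 + exp(2*delta_log_sigma)
           - 2*delta_log_sigma - 1),
    which depends on the prior only through its scale."""
    prior_var = torch.exp(2.0 * prior_log_sigma)
    return 0.5 * (delta_mu ** 2 / prior_var
                  + torch.exp(2.0 * delta_log_sigma)
                  - 2.0 * delta_log_sigma
                  - 1.0)
```

Sampling from the resulting posterior uses the standard reparameterization trick, z = mu_q + exp(log_sigma_q) · eps with eps ~ N(0, I). The spectral regularization mentioned in the abstract is a separate training-stabilization term that penalizes the spectral norms of the convolutional layers; it is not shown in this sketch.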
Author Information
Arash Vahdat (NVIDIA Research)
Jan Kautz (NVIDIA)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Poster: NVAE: A Deep Hierarchical Variational Autoencoder
  Fri Dec 11th, 05:00 -- 07:00 AM, Poster Session 6
More from the Same Authors
- 2020 Poster: Online Adaptation for Consistent Mesh Reconstruction in the Wild
  Xueting Li · Sifei Liu · Shalini De Mello · Kihwan Kim · Xiaolong Wang · Ming-Hsuan Yang · Jan Kautz
- 2020 Poster: Convolutional Tensor-Train LSTM for Spatio-Temporal Learning
  Jiahao Su · Wonmin Byeon · Jean Kossaifi · Furong Huang · Jan Kautz · Anima Anandkumar
- 2020 Poster: On the distance between two neural networks and the stability of learning
  Jeremy Bernstein · Arash Vahdat · Yisong Yue · Ming-Yu Liu
- 2019 Poster: Few-shot Video-to-Video Synthesis
  Ting-Chun Wang · Ming-Yu Liu · Andrew Tao · Guilin Liu · Bryan Catanzaro · Jan Kautz
- 2019 Poster: Joint-task Self-supervised Learning for Temporal Correspondence
  Xueting Li · Sifei Liu · Shalini De Mello · Xiaolong Wang · Jan Kautz · Ming-Hsuan Yang
- 2019 Poster: Dancing to Music
  Hsin-Ying Lee · Xiaodong Yang · Ming-Yu Liu · Ting-Chun Wang · Yu-Ding Lu · Ming-Hsuan Yang · Jan Kautz
- 2018 Poster: Context-aware Synthesis and Placement of Object Instances
  Donghoon Lee · Sifei Liu · Jinwei Gu · Ming-Yu Liu · Ming-Hsuan Yang · Jan Kautz
- 2018 Poster: Video-to-Video Synthesis
  Ting-Chun Wang · Ming-Yu Liu · Jun-Yan Zhu · Guilin Liu · Andrew Tao · Jan Kautz · Bryan Catanzaro
- 2018 Poster: DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors
  Arash Vahdat · Evgeny Andriyash · William Macready
- 2017 Poster: Unsupervised Image-to-Image Translation Networks
  Ming-Yu Liu · Thomas Breuel · Jan Kautz
- 2017 Spotlight: Unsupervised Image-to-Image Translation Networks
  Ming-Yu Liu · Thomas Breuel · Jan Kautz
- 2017 Poster: Learning Affinity via Spatial Propagation Networks
  Sifei Liu · Shalini De Mello · Jinwei Gu · Guangyu Zhong · Ming-Hsuan Yang · Jan Kautz