Poster

BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling

Lars Maaløe · Marco Fraccaro · Valentin Liévin · Ole Winther

East Exhibition Hall B, C #110

Keywords: [ Generative Models ] [ Deep Learning ] [ Algorithms -> Unsupervised Learning ] [ Probabilistic Methods -> Hierarchical Models ] [ Probabilistic Methods -> Latent Variable Models ]


Abstract:

With the introduction of the variational autoencoder (VAE), probabilistic latent variable models have received renewed attention as powerful generative models. However, their performance in terms of test likelihood and quality of generated samples has been surpassed by autoregressive models without stochastic units. Furthermore, flow-based models have recently been shown to be an attractive alternative that scales well to high-dimensional data. In this paper we close the performance gap by constructing VAE models that can effectively utilize a deep hierarchy of stochastic variables and model complex covariance structures. We introduce the Bidirectional-Inference Variational Autoencoder (BIVA), characterized by a skip-connected generative model and an inference network formed by a bidirectional stochastic inference path. We show that BIVA reaches state-of-the-art test likelihoods, generates sharp and coherent natural images, and uses the hierarchy of latent variables to capture different aspects of the data distribution. We observe that BIVA, in contrast to recent results, can be used for anomaly detection. We attribute this to the hierarchy of latent variables which is able to extract high-level semantic features. Finally, we extend BIVA to semi-supervised classification tasks and show that it performs comparably to state-of-the-art results by generative adversarial networks.
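The abstract describes two structural ideas: an inference network with a bidirectional (bottom-up and top-down) stochastic path, and a generative model with skip connections so the decoder conditions directly on higher layers of the latent hierarchy. The following is a minimal NumPy sketch of those two ideas for a two-layer hierarchy, not the paper's actual architecture; all layer sizes, weight initializations, and variable names are illustrative, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)


def dense(x, w):
    """Affine layer followed by tanh (illustrative nonlinearity)."""
    return np.tanh(x @ w)


def gaussian_kl(mu_q, lv_q, mu_p, lv_p):
    """KL( N(mu_q, e^lv_q) || N(mu_p, e^lv_p) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        lv_p - lv_q + (np.exp(lv_q) + (mu_q - mu_p) ** 2) / np.exp(lv_p) - 1.0
    )


# Illustrative sizes (not the paper's).
x_dim, h_dim, z1_dim, z2_dim = 8, 16, 4, 2
W = lambda m, n: rng.normal(scale=0.1, size=(m, n))  # random, untrained weights

x = rng.normal(size=x_dim)

# Bottom-up deterministic path of the inference network.
d1 = dense(x, W(x_dim, h_dim))
d2 = dense(d1, W(h_dim, h_dim))

# q(z2 | x): parameterized from the top of the bottom-up path.
mu_q2, lv_q2 = d2 @ W(h_dim, z2_dim), d2 @ W(h_dim, z2_dim)
z2 = mu_q2 + np.exp(0.5 * lv_q2) * rng.normal(size=z2_dim)  # reparameterization

# Top-down stochastic path: a feature computed from the sampled z2.
t1 = dense(z2, W(z2_dim, h_dim))

# Bidirectional inference: q(z1 | x, z2) combines bottom-up d1 with top-down t1.
h1 = np.concatenate([d1, t1])
mu_q1, lv_q1 = h1 @ W(2 * h_dim, z1_dim), h1 @ W(2 * h_dim, z1_dim)
z1 = mu_q1 + np.exp(0.5 * lv_q1) * rng.normal(size=z1_dim)

# Priors: p(z2) = N(0, I); p(z1 | z2) parameterized from the top-down feature.
mu_p1, lv_p1 = t1 @ W(h_dim, z1_dim), t1 @ W(h_dim, z1_dim)

# Skip-connected generative model: the decoder conditions on z1 AND z2 directly.
dec = dense(np.concatenate([z1, z2]), W(z1_dim + z2_dim, h_dim))
mu_x = dec @ W(h_dim, x_dim)
log_px = -0.5 * np.sum((x - mu_x) ** 2 + np.log(2 * np.pi))  # unit-variance Gaussian

# Single-sample ELBO with one KL term per layer of the hierarchy.
elbo = (
    log_px
    - gaussian_kl(mu_q2, lv_q2, np.zeros(z2_dim), np.zeros(z2_dim))
    - gaussian_kl(mu_q1, lv_q1, mu_p1, lv_p1)
)
print(np.isfinite(elbo))
```

Training would maximize this ELBO over a dataset; the point of the sketch is only the wiring: each inference distribution over a lower latent sees both bottom-up evidence and top-down state, and the decoder receives skip connections from every layer of latents.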
