We present an architecture which lets us train deep, directed generative models with many layers of latent variables. We include deterministic paths between all latent variables and the generated output, and provide a richer set of connections between computations for inference and generation, which enables more effective communication of information throughout the model during training. To improve performance on natural images, we incorporate a lightweight autoregressive model in the reconstruction distribution. These techniques permit end-to-end training of models with 10+ layers of latent variables. Experiments show that our approach achieves state-of-the-art performance on standard image modelling benchmarks, can expose latent class structure in the absence of label information, and can provide convincing imputations of occluded regions in natural images.
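To make the architecture described above concrete, the following is a minimal sketch (in PyTorch, assuming standard hierarchical-VAE conventions) of one top-down block: it draws a latent variable from a prior conditioned on a running deterministic state, optionally swaps in a posterior that also sees bottom-up inference features, and merges the sample back into a deterministic path so information reaches the output even when the latent carries little. All module and variable names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of one top-down block in a hierarchical latent
# variable model with a deterministic path and connections between the
# inference (bottom-up) and generation (top-down) computations.
# Names are hypothetical; this is not the paper's code.
import torch
import torch.nn as nn

class TopDownBlock(nn.Module):
    def __init__(self, channels, z_dim):
        super().__init__()
        # prior p(z | top-down deterministic state)
        self.prior = nn.Conv2d(channels, 2 * z_dim, 3, padding=1)
        # posterior q(z | top-down state, bottom-up inference features)
        self.posterior = nn.Conv2d(2 * channels, 2 * z_dim, 3, padding=1)
        # deterministic path: merge the sampled latent back into the state
        self.merge = nn.Conv2d(channels + z_dim, channels, 3, padding=1)

    def forward(self, td_state, bu_feats=None):
        p_mu, p_logv = self.prior(td_state).chunk(2, dim=1)
        if bu_feats is not None:
            # training / inference: sample from the posterior and pay a KL cost
            q_mu, q_logv = self.posterior(
                torch.cat([td_state, bu_feats], dim=1)).chunk(2, dim=1)
            z = q_mu + torch.randn_like(q_mu) * (0.5 * q_logv).exp()
            kl = 0.5 * (p_logv - q_logv
                        + (q_logv.exp() + (q_mu - p_mu) ** 2) / p_logv.exp()
                        - 1.0).sum(dim=[1, 2, 3])
        else:
            # generation: sample from the prior
            z = p_mu + torch.randn_like(p_mu) * (0.5 * p_logv).exp()
            kl = torch.zeros(td_state.size(0), device=td_state.device)
        # deterministic skip keeps information flowing toward the output
        td_state = td_state + self.merge(torch.cat([td_state, z], dim=1))
        return td_state, kl
```

Stacking many such blocks, with a bottom-up pass supplying the inference features and a lightweight autoregressive distribution over the reconstructed pixels, gives the general flavour of the 10+ layer models the abstract describes.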
Author Information
Philip Bachman (Microsoft Research)
More from the Same Authors
- 2021 Poster: Pretraining Representations for Data-Efficient Reinforcement Learning
  Max Schwarzer · Nitarshan Rajkumar · Michael Noukhovitch · Ankesh Anand · Laurent Charlin · R Devon Hjelm · Philip Bachman · Aaron Courville
- 2020 Poster: Deep Reinforcement and InfoMax Learning
  Bogdan Mazoure · Remi Tachet des Combes · Thang Long Doan · Philip Bachman · R Devon Hjelm
- 2019 Poster: Learning Representations by Maximizing Mutual Information Across Views
  Philip Bachman · R Devon Hjelm · William Buchwalter
- 2015 Poster: Data Generation as Sequential Decision Making
  Philip Bachman · Doina Precup
- 2015 Spotlight: Data Generation as Sequential Decision Making
  Philip Bachman · Doina Precup
- 2014 Poster: Learning with Pseudo-Ensembles
  Philip Bachman · Ouais Alsharif · Doina Precup