Poster

An Architecture for Deep, Hierarchical Generative Models

Philip Bachman

Area 5+6+7+8 #12

Keywords: [ (Other) Probabilistic Models and Methods ] [ (Application) Computer Vision ] [ (Other) Unsupervised Learning Methods ] [ Deep Learning or Neural Networks ]


Abstract:

We present an architecture which lets us train deep, directed generative models with many layers of latent variables. We include deterministic paths between all latent variables and the generated output, and provide a richer set of connections between computations for inference and generation, which enables more effective communication of information throughout the model during training. To improve performance on natural images, we incorporate a lightweight autoregressive model in the reconstruction distribution. These techniques permit end-to-end training of models with 10+ layers of latent variables. Experiments show that our approach achieves state-of-the-art performance on standard image modelling benchmarks, can expose latent class structure in the absence of label information, and can provide convincing imputations of occluded regions in natural images.
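For a concrete picture of the core idea, here is a minimal PyTorch sketch: a top-down generative pass through many stochastic layers, where a deterministic residual path runs alongside every latent variable to the output. All module and parameter names (LatentBlock, DeepGenerator, h_dim, z_dim, and so on) are illustrative assumptions, not the paper's implementation, and the sketch omits the inference network, the shared inference/generation connections, and the autoregressive reconstruction model described in the abstract.

import torch
import torch.nn as nn

class LatentBlock(nn.Module):
    # One stochastic layer with a deterministic skip path to the output.
    # Hypothetical simplification of the paper's architecture: each block
    # samples a latent from a conditional prior and folds it back into a
    # deterministic hidden state that continues down to the output.
    def __init__(self, h_dim, z_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)      # mu, logvar of p(z_i | h)
        self.merge = nn.Linear(h_dim + z_dim, h_dim)  # fold z back into the path

    def forward(self, h):
        mu, logvar = self.prior(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        # Deterministic residual path: information can bypass the latent.
        return h + torch.tanh(self.merge(torch.cat([h, z], dim=-1)))

class DeepGenerator(nn.Module):
    # Top-down generative pass through many latent layers (10+ in the paper).
    def __init__(self, h_dim=128, z_dim=16, x_dim=784, n_layers=10):
        super().__init__()
        self.h0 = nn.Parameter(torch.zeros(1, h_dim))
        self.blocks = nn.ModuleList(LatentBlock(h_dim, z_dim) for _ in range(n_layers))
        self.readout = nn.Linear(h_dim, x_dim)  # parameters of p(x | h)

    def forward(self, batch_size):
        h = self.h0.expand(batch_size, -1)
        for block in self.blocks:
            h = block(h)
        return torch.sigmoid(self.readout(h))  # e.g. Bernoulli means for images

x = DeepGenerator()(batch_size=4)  # (4, 784) sampled reconstruction means

The deterministic residual path is the design choice worth noting: because gradients can flow to every layer without passing through a sampling step, stacks of many stochastic layers remain trainable end-to-end, which is what makes the 10+ layer depth reported in the abstract feasible.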
