Poster in Workshop: Bayesian Deep Learning

Generalization Gap in Amortized Inference

Mingtian Zhang · Peter Hayes · David Barber


Abstract:

The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications. The Variational Auto-Encoder (VAE) is a popular class of latent variable models used for many such applications, including density estimation, representation learning, and lossless compression. In this work, we highlight how the common use of amortized inference to scale the training of VAE models to large datasets can be a major cause of poor generalization performance. We propose a new training phase for the inference network that helps reduce over-fitting to the training data. We demonstrate how the proposed scheme can improve generalization performance in the context of image modeling.
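For readers unfamiliar with the gap the abstract refers to, the sketch below is a rough illustration, not the authors' proposed training phase. It assumes a standard Bernoulli VAE in PyTorch (all architecture choices, dimensions, and helper names are illustrative) and contrasts the amortized ELBO produced by the shared inference network with an ELBO obtained by refining the variational parameters per test example while the generative model is held fixed. The improvement of the refined ELBO over the amortized one on held-out data is one common way to probe how much the amortized encoder limits generalization.

```python
# Minimal sketch (assumptions throughout): a Gaussian-encoder / Bernoulli-decoder VAE
# with an amortized inference network, plus per-example refinement of the variational
# parameters on test data.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        # Amortized inference network q(z|x): maps x to Gaussian mean and log-variance.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        # Generative network p(x|z): maps z to Bernoulli logits over (binarized) pixels.
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x, mu, log_var):
        # Single-sample Monte Carlo estimate of the ELBO for given q(z|x) parameters.
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        logits = self.dec(z)
        log_px_z = -F.binary_cross_entropy_with_logits(
            logits, x, reduction='none').sum(-1)
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1)  # KL(q || N(0, I))
        return log_px_z - kl

    def amortized_elbo(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        return self.elbo(x, mu, log_var)


def refined_elbo(model, x, steps=100, lr=1e-2):
    # Optimize per-example variational parameters, initialized from the amortized
    # encoder, with the generative model fixed (its gradients are simply ignored).
    with torch.no_grad():
        mu, log_var = model.enc(x).chunk(2, dim=-1)
    mu = mu.clone().requires_grad_(True)
    log_var = log_var.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, log_var], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model.elbo(x, mu, log_var).mean()
        loss.backward()
        opt.step()
    return model.elbo(x, mu, log_var)
```

On a held-out batch, comparing `model.amortized_elbo(x_test).mean()` with `refined_elbo(model, x_test).mean()` gives a rough measure of how far the amortized posterior is from the best variational fit the model family allows; the abstract's point is that this shortfall can be substantially worse on test data than on training data.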
