
Poster

Isolating Sources of Disentanglement in Variational Autoencoders

Tian Qi Chen · Xuechen (Chen) Li · Roger Grosse · David Duvenaud

Room 210 #58

Keywords: [ Representation Learning ] [ Generative Models ] [ Deep Autoencoders ] [ Unsupervised Learning ] [ Variational Inference ]


Abstract:

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the beta-TCVAE (Total Correlation Variational Autoencoder) algorithm, a refinement and plug-in replacement of the beta-VAE for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement when the model is trained using our framework.
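For context, the decomposition the abstract refers to is usually written in aggregated-posterior notation; the exact notation below (q(z|n) for the encoder, p(n) for the empirical data distribution, q(z) for the aggregated posterior, p(z) for the prior) is an assumption on my part, as the abstract does not spell it out. A sketch of the KL term of the evidence lower bound splitting into an index-code mutual information term, a total correlation term, and a dimension-wise KL term:

% Sketch, not the paper's verbatim derivation: decomposition of the ELBO's
% averaged KL term, assuming q(z) = E_{p(n)}[q(z|n)] and q(z,n) = q(z|n) p(n).
\mathbb{E}_{p(n)}\!\left[\mathrm{KL}\big(q(z \mid n)\,\|\,p(z)\big)\right]
  = \underbrace{\mathrm{KL}\big(q(z, n)\,\|\,q(z)\,p(n)\big)}_{\text{index-code mutual information}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}

Likewise, a hedged sketch of what a mutual-information-gap style metric computes (the normalization by the factor entropy is my assumption): for each ground-truth factor v_k, take the gap between the largest and second-largest mutual information achieved by any single latent dimension, normalize by the factor's entropy, and average over the K factors.

% Sketch of a mutual-information-gap metric: I(z_j; v_k) is the mutual
% information between latent dimension z_j and ground-truth factor v_k,
% H(v_k) its entropy, and j^{(k)} = \arg\max_j I(z_j; v_k).
\mathrm{MIG} = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{H(v_k)}
  \Big( I\big(z_{j^{(k)}}; v_k\big) - \max_{j \neq j^{(k)}} I\big(z_j; v_k\big) \Big)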