

Poster in Workshop: UniReps: Unifying Representations in Neural Models

Supervising Variational Autoencoder Latent Representations with Language

Thomas Lu · Aboli Marathe · Ada Martin


Abstract:

Supervising latent representations of data is of great interest for modern multi-modal generative machine learning. In this work, we propose two new methods for using text to condition the latent representations of a VAE, and evaluate them on a novel conditional image-generation benchmark task. We find that the applied methods can generate highly accurate reconstructed images through language querying with minimal compute resources. Our methods are quantitatively successful at conforming to textually supervised attributes of an image while keeping unsupervised attributes constant. More broadly, we present critical observations on disentanglement between supervised and unsupervised properties of images and identify common barriers to effective disentanglement.
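The abstract does not detail the two conditioning methods, so the following is only a minimal illustrative sketch of one plausible approach consistent with the description: aligning a designated "supervised" subspace of the VAE latent with a projection of a text embedding, while leaving the remaining dimensions unsupervised. All names here (`TextConditionedVAE`, `text_proj`, `sup_dim`, and the loss weighting) are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextConditionedVAE(nn.Module):
    """Hypothetical sketch: a VAE whose first `sup_dim` latent
    dimensions are supervised by a text embedding `t` (assumed to
    come from some frozen text encoder, not shown)."""

    def __init__(self, x_dim=784, h_dim=256, z_dim=32, t_dim=64, sup_dim=16):
        super().__init__()
        self.sup_dim = sup_dim
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )
        # Projects the text embedding into the supervised latent subspace.
        self.text_proj = nn.Linear(t_dim, sup_dim)

    def forward(self, x, t):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.dec(z)
        # Supervision term: pull supervised latent dims toward the
        # projected text embedding; remaining dims stay unsupervised.
        sup = F.mse_loss(mu[:, : self.sup_dim], self.text_proj(t))
        recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
        kld = -0.5 * torch.mean(
            torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        )
        return x_hat, recon + kld + sup
```

Under this sketch, language querying at generation time would amount to setting the supervised dimensions from `text_proj(t)` for a text query and sampling the unsupervised dimensions from the prior, which mirrors the abstract's goal of varying textually supervised attributes while holding unsupervised ones constant.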
