Explicitly disentangling image content from translation and rotation with spatial-VAE
Tristan Bepler · Ellen Zhong · Kotaro Kelley · Edward Brignole · Bonnie Berger

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #89

Given an image dataset, we are often interested in finding data generative factors that encode semantic content independently from pose variables such as rotation and translation. However, current disentanglement approaches do not impose any specific structure on the learned latent representations. We propose a method for explicitly disentangling image rotation and translation from other unstructured latent factors in a variational autoencoder (VAE) framework. By formulating the generative model as a function of the spatial coordinate, we make the reconstruction error differentiable with respect to latent translation and rotation parameters. This formulation allows us to train a neural network to perform approximate inference on these latent variables while explicitly constraining them to only represent rotation and translation. We demonstrate that this framework, termed spatial-VAE, effectively learns latent representations that disentangle image rotation and translation from content and improves reconstruction over standard VAEs on several benchmark datasets, including applications to modeling continuous 2-D views of proteins from single particle electron microscopy and galaxies in astronomical images.
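The core idea described above — a generator that maps a spatial coordinate (plus an unstructured latent vector) to a pixel value, with rotation and translation applied to the coordinates so that the reconstruction error is differentiable in those pose parameters — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the toy single-layer decoder, and all parameters are illustrative assumptions.

```python
import numpy as np

def coordinate_grid(size):
    # Pixel-center coordinates on a regular grid over [-1, 1] x [-1, 1].
    ax = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(ax, ax)
    return np.stack([xx.ravel(), yy.ravel()], axis=1)  # shape (size*size, 2)

def transform(coords, theta, dx, dy):
    # Rotate each coordinate by theta, then translate by (dx, dy).
    # Because this is a smooth function of (theta, dx, dy), a reconstruction
    # loss computed downstream is differentiable in these pose variables.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return coords @ R.T + np.array([dx, dy])

def decode(coords, z, W, b):
    # Toy stand-in for the generator network: each pixel value is predicted
    # from its (transformed) coordinate concatenated with the latent code z.
    # In spatial-VAE this would be an MLP applied independently per pixel.
    n = coords.shape[0]
    inp = np.concatenate([coords, np.broadcast_to(z, (n, z.shape[0]))], axis=1)
    return np.tanh(inp @ W + b)  # shape (n_pixels,)

# Example: reconstruct an 8x8 image under a hypothesized rotation/translation.
coords = coordinate_grid(8)
z = np.zeros(3)                              # illustrative unstructured latent
rng = np.random.default_rng(0)
W = rng.normal(size=2 + z.shape[0])          # toy decoder weights
pixels = decode(transform(coords, np.pi / 4, 0.1, -0.1), z, W, 0.0)
```

In a full model, `theta`, `dx`, and `dy` would be sampled from approximate posteriors produced by the inference network, and the mean squared error between `pixels` and the observed image would be minimized jointly with the usual VAE objective.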

Author Information

Tristan Bepler (MIT)
Ellen Zhong (Massachusetts Institute of Technology)
Kotaro Kelley (New York Structural Biology Center)
Edward Brignole (Massachusetts Institute of Technology)
Bonnie Berger (MIT)