

Poster in Workshop: Bayesian Deep Learning

Revisiting the Structured Variational Autoencoder

Yixiu Zhao · Scott Linderman


Abstract:

The Structured Variational Autoencoder (SVAE) was introduced five years ago. It combined a modeling idea, using probabilistic graphical models (PGMs) as priors on latent variables and deep neural networks (DNNs) to map them to observed data, with an inference idea, having the recognition network output conjugate potentials to the PGM prior rather than a full posterior. While mathematically appealing, the SVAE proved impractical to use or extend: learning required implicit differentiation of a PGM inference algorithm, and the original authors' implementation was in pure Python with no GPU or TPU support. Now, armed with the power of JAX, a software library for automatic differentiation and compilation to CPU, GPU, or TPU targets, we revisit the SVAE. We develop a modular implementation that is orders of magnitude faster than the original code and demonstrate it in a variety of settings, including a scientific application to animal behavior modeling. Furthermore, we extend the original model by incorporating interior potentials, which allow for more expressive PGM priors, such as the Recurrent Switching Linear Dynamical System (rSLDS). Our JAX implementation of the SVAE and its extensions opens up avenues for practical applications, further extensions, and theoretical investigations.
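To make the inference idea concrete, the following is a minimal JAX sketch of the conjugate-potential trick for a single Gaussian latent variable with a Gaussian prior. It is an illustration under simplifying assumptions, not the authors' implementation; all function and parameter names here (recognition_potentials, combine_with_prior, and so on) are hypothetical.

    # Minimal sketch of the SVAE inference idea, assuming a single Gaussian
    # latent with a conjugate (diagonal) Gaussian prior. Hypothetical code,
    # not the authors' API.
    import jax
    import jax.numpy as jnp

    def recognition_potentials(params, x):
        # Hypothetical one-layer recognition network: maps an observation x
        # to Gaussian natural parameters (h, J) rather than a full posterior.
        hidden = jnp.tanh(params["W"] @ x + params["b"])
        h = params["W_h"] @ hidden            # linear natural parameter
        J = jnp.exp(params["log_j"])          # positive diagonal precision
        return h, J

    def combine_with_prior(h_lik, J_lik, prior_mean, prior_prec):
        # Conjugacy: the natural parameters of the prior and the recognition
        # potential simply add, giving the Gaussian posterior in closed form.
        J_post = prior_prec + J_lik
        h_post = prior_prec * prior_mean + h_lik
        mean_post = h_post / J_post
        return mean_post, J_post

    def sample_posterior(key, mean_post, J_post):
        # Reparameterized sample so gradients flow through inference.
        eps = jax.random.normal(key, mean_post.shape)
        return mean_post + eps / jnp.sqrt(J_post)

    # Usage with toy shapes: D observed dimensions, K latent dimensions.
    key = jax.random.PRNGKey(0)
    D, K = 5, 3
    params = {
        "W": jnp.ones((8, D)) * 0.1, "b": jnp.zeros(8),
        "W_h": jnp.ones((K, 8)) * 0.1, "log_j": jnp.zeros(K),
    }
    x = jnp.ones(D)
    h, J = recognition_potentials(params, x)
    mean_post, J_post = combine_with_prior(h, J, jnp.zeros(K), jnp.ones(K))
    z = sample_posterior(key, mean_post, J_post)

In the full SVAE, the prior is a structured PGM (for example, a linear dynamical system) rather than a single Gaussian, so combining the recognition potentials with the prior requires running a message-passing algorithm instead of the one-line natural-parameter addition above; the conjugate form of the potentials is what keeps that inference step tractable.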
