Composing graphical models with neural networks for structured representations and fast inference
Matthew Johnson · David Duvenaud · Alex Wiltschko · Ryan Adams · Sandeep R Datta
Keywords:
Deep Learning or Neural Networks
(Other) Unsupervised Learning Methods
(Other) Probabilistic Models and Methods
Variational Inference
Nonlinear Dimension Reduction and Manifold Learning
Graphical Models
2016 Poster
Abstract
We propose a general modeling and inference framework that combines the complementary strengths of probabilistic graphical models and deep learning methods. Our model family composes latent graphical models with neural network observation likelihoods. For inference, we use recognition networks to produce local evidence potentials, then combine them with the model distribution using efficient message-passing algorithms. All components are trained simultaneously with a single stochastic variational inference objective. We illustrate this framework by automatically segmenting and categorizing mouse behavior from raw depth video, and demonstrate several other example models.
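The combination described in the abstract — a recognition network that emits local evidence potentials, which are then merged with a conjugate latent model by message passing — can be illustrated for the simplest case of a single Gaussian latent node. The sketch below is not the authors' implementation; the function `recognition_potentials` and all parameter names are hypothetical, and it assumes a diagonal-precision Gaussian potential and a standard-normal prior purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def recognition_potentials(x, W, b):
    """Hypothetical recognition network: maps an observation x to
    Gaussian natural-parameter potentials (h, J) for the latent z."""
    hidden = np.tanh(W @ x + b)
    h = hidden[:2]                    # linear natural parameter
    J = np.diag(np.exp(hidden[2:]))   # diagonal, positive-definite precision
    return h, J

# Prior p(z) = N(0, I) written in natural parameters: h0 = 0, J0 = I.
h0, J0 = np.zeros(2), np.eye(2)

# A toy observation and hypothetical recognition-network weights.
x = rng.normal(size=3)
W, b = rng.normal(size=(4, 3)), np.zeros(4)

h, J = recognition_potentials(x, W, b)

# Conjugate "message passing" for this one-node graph: natural
# parameters of prior and evidence potential simply add, after
# which we recover the posterior mean and covariance.
J_post = J0 + J
mu_post = np.linalg.solve(J_post, h0 + h)
Sigma_post = np.linalg.inv(J_post)
```

In the full framework the same idea scales up: the latent graphical model (e.g. a switching linear dynamical system in the mouse-behavior example) supplies structured messages, the recognition network supplies per-datum potentials, and both sets of parameters are trained jointly under one stochastic variational inference objective.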