Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs). In the absence of a known analytical form for the posterior and likelihood expectation, VAEs resort to approximations such as (Amortized) Variational Inference (AVI) and Monte-Carlo sampling. We exploit the Continuous Piecewise Affine (CPA) property of modern DGNs to derive their posterior and marginal distributions, as well as the latter's first two moments. These findings enable us to derive an analytical Expectation-Maximization (EM) algorithm for gradient-free DGN learning. We demonstrate empirically that EM training of DGNs yields higher likelihood than VAE training. Our framework will guide the design of new VAE AVI schemes that better approximate the true posterior, and opens new avenues for applying standard statistical tools to model comparison, anomaly detection, and missing-data imputation.
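To see why the CPA property makes closed-form EM possible, note that on any single affine region the generator reduces to a linear-Gaussian model, for which the posterior over the latent and the EM updates are classical and exact. The sketch below illustrates that per-region building block with textbook probabilistic-PCA EM; the names (`W`, `b`, `s2`) and the single-region reduction are illustrative assumptions, not the paper's full per-region algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_linear_gaussian(X, k, iters=100):
    """EM for the single-region model x = W z + b + eps,
    z ~ N(0, I_k), eps ~ N(0, s2 * I_d) (classic PPCA EM).
    Illustrative sketch only; not the paper's exact procedure."""
    n, d = X.shape
    b = X.mean(axis=0)            # MLE of the bias is the data mean
    Xc = X - b
    W = rng.normal(size=(d, k)) * 0.1
    s2 = Xc.var()
    for _ in range(iters):
        # E-step: the posterior z | x is Gaussian and available exactly.
        M = W.T @ W + s2 * np.eye(k)
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                      # (n, k) posterior means
        Szz = n * s2 * Minv + Ez.T @ Ez         # sum of E[z z^T] over data
        # M-step: closed-form parameter updates, no gradients required.
        W = Xc.T @ Ez @ np.linalg.inv(Szz)
        s2 = (np.sum(Xc**2) - np.sum((Xc.T @ Ez) * W)) / (n * d)
    return W, b, s2
```

Because both steps are in closed form, the per-region update needs no sampling and no learned amortized encoder; the paper's contribution is deriving the analogous quantities for the full piecewise-affine generator across all regions.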
Author Information
Randall Balestriero (Rice University)
Sebastien PARIS (University of Toulon)
Richard Baraniuk (Rice University)
More from the Same Authors
- 2020 Workshop: Workshop on Deep Learning and Inverse Problems
  Reinhard Heckel · Paul Hand · Richard Baraniuk · Lenka Zdeborová · Soheil Feizi
- 2020 Poster: MomentumRNN: Integrating Momentum into Recurrent Neural Networks
  Tan Nguyen · Richard Baraniuk · Andrea Bertozzi · Stanley Osher · Bao Wang
- 2019 Workshop: Solving inverse problems with deep networks: New architectures, theoretical foundations, and applications
  Reinhard Heckel · Paul Hand · Richard Baraniuk · Joan Bruna · Alexandros Dimakis · Deanna Needell
- 2019 Poster: The Geometry of Deep Networks: Power Diagram Subdivision
  Randall Balestriero · Romain Cosentino · Behnaam Aazhang · Richard Baraniuk
- 2018 Workshop: Integration of Deep Learning Theories
  Richard Baraniuk · Anima Anandkumar · Stephane Mallat · Ankit Patel · nhật Hồ
- 2018 Workshop: Machine Learning for Geophysical & Geochemical Signals
  Laura Pyrak-Nolte · James Rustad · Richard Baraniuk
- 2017 Workshop: Advances in Modeling and Learning Interactions from Complex Data
  Gautam Dasarathy · Mladen Kolar · Richard Baraniuk
- 2017 Poster: Learned D-AMP: Principled Neural Network based Compressive Image Recovery
  Chris Metzler · Ali Mousavi · Richard Baraniuk
- 2016 Workshop: Machine Learning for Education
  Richard Baraniuk · Jiquan Ngiam · Christoph Studer · Phillip Grimaldi · Andrew Lan
- 2016 Poster: A Probabilistic Framework for Deep Learning
  Ankit Patel · Tan Nguyen · Richard Baraniuk
- 2014 Workshop: Human Propelled Machine Learning
  Richard Baraniuk · Michael Mozer · Divyanshu Vats · Christoph Studer · Andrew E Waters · Andrew Lan
- 2013 Poster: When in Doubt, SWAP: High-Dimensional Sparse Recovery from Correlated Measurements
  Divyanshu Vats · Richard Baraniuk
- 2011 Poster: SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
  Andrew E Waters · Aswin C Sankaranarayanan · Richard Baraniuk
- 2009 Workshop: Manifolds, sparsity, and structured models: When can low-dimensional geometry really help?
  Richard Baraniuk · Volkan Cevher · Mark A Davenport · Piotr Indyk · Bruno Olshausen · Michael B Wakin
- 2008 Poster: Sparse Signal Recovery Using Markov Random Fields
  Volkan Cevher · Marco F Duarte · Chinmay Hegde · Richard Baraniuk
- 2008 Spotlight: Sparse Signal Recovery Using Markov Random Fields
  Volkan Cevher · Marco F Duarte · Chinmay Hegde · Richard Baraniuk
- 2007 Poster: Random Projections for Manifold Learning
  Chinmay Hegde · Richard Baraniuk