We consider the problem of lossy image compression with deep latent variable models. State-of-the-art methods build on hierarchical variational autoencoders (VAEs) and learn inference networks to predict a compressible latent representation of each data point. Drawing on the variational inference perspective on compression, we identify three approximation gaps which limit performance in the conventional approach: an amortization gap, a discretization gap, and a marginalization gap. We propose remedies for each of these three limitations based on ideas related to iterative inference, stochastic annealing for discrete optimization, and bits-back coding, resulting in the first application of bits-back coding to lossy compression. In our experiments, which include extensive baseline comparisons and ablation studies, we achieve new state-of-the-art performance on lossy image compression using an established VAE architecture, by changing only the inference method.
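To make the first two remedies concrete, here is a minimal PyTorch-style sketch of per-image iterative inference combined with a stochastic annealing relaxation of rounding. It illustrates the ideas described in the abstract rather than the authors' implementation; `encoder`, `decoder`, `prior_nll` (a scalar negative log-prior, proportional to code length), and the trade-off weight `lam` are assumed placeholders.

```python
import torch

def soft_round(z, tau):
    """Stochastically relax rounding via a Gumbel-softmax over the two
    integer neighbors of z. As the temperature tau -> 0, this approaches
    hard rounding, shrinking the discretization gap."""
    lo = torch.floor(z)
    candidates = torch.stack([lo, lo + 1.0], dim=-1)      # two nearest integers
    # Logits favor the nearer integer; Gumbel noise makes the choice stochastic.
    logits = -torch.abs(z.unsqueeze(-1) - candidates) / tau
    gumbel = -torch.log(-torch.log(torch.rand_like(logits).clamp_min(1e-9)))
    weights = torch.softmax(logits + gumbel, dim=-1)
    return (weights * candidates).sum(dim=-1)

def refine_latents(x, encoder, decoder, prior_nll, lam=0.01,
                   steps=2000, lr=5e-3):
    """Close the amortization gap by optimizing the latents of a single
    image x, starting from the amortized encoder's prediction."""
    z = encoder(x).detach().clone().requires_grad_(True)  # amortized init
    opt = torch.optim.Adam([z], lr=lr)
    for t in range(steps):
        tau = max(0.5 * (1.0 - t / steps), 1e-3)          # anneal temperature
        z_hat = soft_round(z, tau)
        rate = prior_nll(z_hat)                           # ~ code length in nats
        distortion = torch.mean((decoder(z_hat) - x) ** 2)
        loss = rate + lam * distortion                    # rate-distortion objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.round(z.detach())                        # final discrete latents
```

The third remedy, bits-back coding, operates at coding time rather than inside this optimization loop: the rounded latents are entropy-coded under the prior while auxiliary bits are recovered from the approximate posterior, closing the marginalization gap.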
Author Information
Yibo Yang (University of California, Irvine)
Robert Bamler (University of Tübingen)
Stephan Mandt (University of California, Irvine)
Stephan Mandt is an Assistant Professor of Computer Science at the University of California, Irvine. From 2016 until 2018, he was a Senior Researcher and Head of the statistical machine learning group at Disney Research, first in Pittsburgh and later in Los Angeles. He previously held postdoctoral positions at Columbia University and Princeton University. Stephan holds a Ph.D. in Theoretical Physics from the University of Cologne. He is a Fellow of the German National Merit Foundation and a Kavli Fellow of the U.S. National Academy of Sciences, and was a visiting researcher at Google Brain. Stephan regularly serves as an Area Chair for NeurIPS, ICML, AAAI, and ICLR, and is a member of the Editorial Board of JMLR. His research is currently supported by NSF, DARPA, Intel, and Qualcomm.
More from the Same Authors
- 2021 : Analyzing High-Resolution Clouds and Convection using Multi-Channel VAEs »
  Harshini Mangipudi · Griffin Mooers · Mike Pritchard · Tom Beucler · Stephan Mandt
- 2021 : Structured Stochastic Gradient MCMC: a hybrid VI and MCMC approach »
  Antonios Alexos · Alex Boyd · Stephan Mandt
- 2021 Poster: Detecting and Adapting to Irregular Distribution Shifts in Bayesian Online Learning »
  Aodong Li · Alex Boyd · Padhraic Smyth · Stephan Mandt
- 2020 : Q/A and Discussion for ML Theory Session »
  Karthik Kashinath · Mayur Mudigonda · Stephan Mandt · Rose Yu
- 2020 : Stephan Mandt »
  Stephan Mandt
- 2020 Poster: User-Dependent Neural Sequence Models for Continuous-Time Event Data »
  Alex Boyd · Robert Bamler · Stephan Mandt · Padhraic Smyth
- 2019 Poster: Deep Generative Video Compression »
  Salvator Lombardo · JUN HAN · Christopher Schroers · Stephan Mandt
- 2017 : Introduction »
  Cheng Zhang · Francisco Ruiz · Dustin Tran · James McInerney · Stephan Mandt
- 2017 Workshop: Advances in Approximate Bayesian Inference »
  Francisco Ruiz · Stephan Mandt · Cheng Zhang · James McInerney · Dustin Tran · David Blei · Max Welling · Tamara Broderick · Michalis Titsias
- 2017 Poster: Perturbative Black Box Variational Inference »
  Robert Bamler · Cheng Zhang · Manfred Opper · Stephan Mandt
- 2016 Workshop: Advances in Approximate Bayesian Inference »
  Tamara Broderick · Stephan Mandt · James McInerney · Dustin Tran · David Blei · Kevin Murphy · Andrew Gelman · Michael I Jordan
- 2016 Poster: Exponential Family Embeddings »
  Maja Rudolph · Francisco Ruiz · Stephan Mandt · David Blei
- 2015 : Finding Sparse Features in Strongly Confounded Medical Data »
  Stephan Mandt · Florian Wenzel
- 2015 Workshop: Advances in Approximate Bayesian Inference »
  Dustin Tran · Tamara Broderick · Stephan Mandt · James McInerney · Shakir Mohamed · Alp Kucukelbir · Matthew D. Hoffman · Neil Lawrence · David Blei
- 2014 Poster: Smoothed Gradients for Stochastic Variational Inference »
  Stephan Mandt · David Blei