We propose and study the problem of distribution-preserving lossy compression. Motivated by recent advances in extreme image compression, which make it possible to maintain artifact-free reconstructions even at very low bitrates, we propose to optimize the rate-distortion tradeoff under the constraint that the reconstructed samples follow the distribution of the training data. The resulting compression system recovers both ends of the spectrum: at zero bitrate it learns a generative model of the data, and at sufficiently high bitrates it achieves perfect reconstruction. Furthermore, at intermediate bitrates it smoothly interpolates between learning a generative model of the training data and perfectly reconstructing the training samples. We study several methods to approximately solve the proposed optimization problem, including a novel combination of Wasserstein GAN and Wasserstein Autoencoder, and present an extensive theoretical and empirical characterization of the proposed compression systems.
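A minimal, runnable sketch of the objective described above, under loud assumptions: the paper estimates the divergence between data and reconstruction distributions in high dimensions via a Wasserstein GAN / Wasserstein Autoencoder combination, whereas this toy uses the closed-form 1-D empirical Wasserstein-1 distance (mean absolute difference of sorted samples) and a uniform scalar quantizer as a stand-in for the learned encoder. The names `dp_objective`, `w1_distance`, and `quantize` are illustrative, not from the paper.

```python
import numpy as np

def w1_distance(a, b):
    # Empirical 1-D Wasserstein-1 distance between two equally sized
    # samples: the mean absolute difference of the sorted samples.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def dp_objective(x, x_hat, lam):
    # Distribution-preserving compression objective (toy version):
    # distortion plus lam times a divergence between the data
    # distribution and the reconstruction distribution.
    distortion = np.mean((x - x_hat) ** 2)
    return distortion + lam * w1_distance(x, x_hat)

def quantize(x, num_levels):
    # Uniform scalar quantizer over [min, max]; more levels means a
    # higher bitrate and, in this toy, lower distortion.
    lo, hi = x.min(), x.max()
    step = (hi - lo) / num_levels
    idx = np.floor((x - lo) / step).clip(0, num_levels - 1)
    return lo + (idx + 0.5) * step
```

Increasing `num_levels` (the rate) drives the distortion term toward zero, while the divergence term penalizes reconstructions whose marginal distribution drifts from the data's; at zero rate only the divergence term remains, which is the generative-model end of the spectrum.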
Michael Tschannen (ETH Zurich)
Eirikur Agustsson (ETH Zurich)
I am a PhD student at the [Computer Vision Lab](http://www.vision.ee.ethz.ch) of [ETH Zurich](https://www.ethz.ch/en.html), under the supervision of [Prof. Luc Van Gool](https://scholar.google.ch/citations?user=TwMib_QAAAAJ&hl=en&oi=ao). Previously, I received an MSc degree in Electrical Engineering and Information Technology from ETH Zurich and a double BSc degree in Mathematics and Electrical Engineering from the University of Iceland. My main research interests include deep learning for data compression, regression, and classification.
Mario Lucic (Google Brain)
More from the Same Authors
2020 Session: Orals & Spotlights: Deep Learning »
Graham W Taylor · Mario Lucic
2018 Poster: Assessing Generative Models via Precision and Recall »
Mehdi S. M. Sajjadi · Olivier Bachem · Mario Lucic · Olivier Bousquet · Sylvain Gelly
2018 Poster: Are GANs Created Equal? A Large-Scale Study »
Mario Lucic · Karol Kurach · Marcin Michalski · Sylvain Gelly · Olivier Bousquet
2017 Poster: Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees »
Francesco Locatello · Michael Tschannen · Gunnar Raetsch · Martin Jaggi
2017 Poster: Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations »
Eirikur Agustsson · Fabian Mentzer · Michael Tschannen · Lukas Cavigelli · Radu Timofte · Luca Benini · Luc V Gool