
Deep Generative Models for Distribution-Preserving Lossy Compression
Michael Tschannen · Eirikur Agustsson · Mario Lucic

Thu Dec 06 07:45 AM -- 09:45 AM (PST) @ Room 210 #83

We propose and study the problem of distribution-preserving lossy compression. Motivated by recent advances in extreme image compression, which make it possible to maintain artifact-free reconstructions even at very low bitrates, we propose to optimize the rate-distortion tradeoff under the constraint that the reconstructed samples follow the distribution of the training data. The resulting compression system recovers both ends of the spectrum: at zero bitrate it learns a generative model of the data, and at sufficiently high bitrates it achieves perfect reconstruction. For intermediate bitrates, it smoothly interpolates between learning a generative model of the training data and perfectly reconstructing the training samples. We study several methods to approximately solve the proposed optimization problem, including a novel combination of Wasserstein GAN and Wasserstein Autoencoder, and present an extensive theoretical and empirical characterization of the proposed compression systems.
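The constrained objective described above can be relaxed into a single loss: a distortion term between inputs and reconstructions, plus a penalty that pushes the model's output distribution toward the data distribution. The sketch below is a minimal illustration of that structure, not the paper's actual training procedure: it uses MSE distortion and a crude first/second-moment discrepancy as a stand-in for the Wasserstein distance, and the function name and `lam` weight are hypothetical.

```python
import numpy as np

def dplc_objective(x, x_hat, x_gen, lam=1.0):
    """Illustrative distribution-preserving compression loss (a sketch).

    x     : data samples, shape (n, d)
    x_hat : reconstructions of x
    x_gen : samples from the model's output distribution
    lam   : weight on the distribution-matching penalty (hypothetical)
    """
    # Distortion term d(X, X_hat): mean squared error.
    distortion = np.mean((x - x_hat) ** 2)
    # Moment-matching surrogate for the divergence between the data
    # distribution and the model's output distribution (the paper uses
    # Wasserstein-based losses instead; this is an assumption for brevity).
    mean_gap = np.linalg.norm(x.mean(axis=0) - x_gen.mean(axis=0))
    std_gap = np.linalg.norm(x.std(axis=0) - x_gen.std(axis=0))
    return distortion + lam * (mean_gap + std_gap)
```

With perfect reconstructions and distribution-matched samples the loss is zero, mirroring the high-bitrate end of the spectrum; as reconstructions degrade or the output distribution drifts, either term grows.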

Author Information

Michael Tschannen (ETH Zurich)
Eirikur Agustsson (ETH Zurich)

I am a PhD student at the [Computer Vision Lab](http://www.vision.ee.ethz.ch) of [ETH Zurich](https://www.ethz.ch/en.html), under the supervision of [Prof. Luc Van Gool](https://scholar.google.ch/citations?user=TwMib_QAAAAJ&hl=en&oi=ao). Previously, I received an MSc degree in Electrical Engineering and Information Technology from ETH Zurich and a double BSc degree in Mathematics and Electrical Engineering from the University of Iceland. My main research interests include deep learning for data compression, regression, and classification.

Mario Lucic (Google Brain)
