Compression is at the heart of effective representation learning. However, lossy compression is typically achieved through simple parametric noise models, such as additive Gaussian noise, to preserve analytic tractability, and the limitations this imposes on learning are largely unexplored. Moreover, the Gaussian prior assumptions in models such as variational autoencoders (VAEs) provide only an upper bound on the compression rate in general. We introduce a new noise channel, Echo noise, that admits a simple, exact expression for mutual information for arbitrary input distributions. The noise is constructed in a data-driven fashion and does not require restrictive distributional assumptions. With its complex encoding mechanism and exact rate regularization, Echo leads to improved bounds on log-likelihood and dominates beta-VAEs across the achievable range of rate-distortion trade-offs. Finally, we show that Echo noise can outperform flow-based methods without the need to train additional distributional transformations.
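To make the abstract's mechanism concrete, below is a minimal NumPy sketch of the Echo channel, not the authors' released implementation. It assumes a diagonal noise scale S(x) with entries strictly inside (-1, 1) so the echo series converges, truncates the infinite recursion at a fixed depth, and reuses a shuffled batch in place of fresh i.i.d. samples; the names `echo_sample` and `echo_rate` are illustrative.

```python
import numpy as np

def echo_sample(f_x, s_x, depth=16, rng=None):
    """Truncated Echo noise channel: z = f(x) + S(x) * eps.

    The noise eps is built by recursively applying the channel to
    i.i.d. inputs: eps = f(x1) + S(x1) * (f(x2) + S(x2) * (...)).
    f_x, s_x: (batch, d) arrays of encoder outputs f(x) and diagonal
    scales S(x), with |S| < 1 entrywise so the series converges.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    batch = f_x.shape[0]
    eps = np.zeros_like(f_x)
    prefix = np.ones_like(s_x)           # running product of past scales
    for _ in range(depth):
        idx = rng.permutation(batch)     # shuffled batch ~ fresh iid draws
        eps += prefix * f_x[idx]         # add the next term of the series
        prefix *= s_x[idx]
    return f_x + s_x * eps

def echo_rate(s_x):
    """Exact channel rate for diagonal S: I(X; Z) = -E[log |det S(x)|]."""
    return -np.mean(np.sum(np.log(np.abs(s_x)), axis=1))
```

Because the rate is exact rather than a variational bound, `echo_rate` can be used directly as the compression penalty in a rate-distortion objective, in place of the KL term of a beta-VAE.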
Author Information
Rob Brekelmans (University of Southern California)
Daniel Moyer (MIT CSAIL)
Aram Galstyan (USC Information Sciences Institute)
Greg Ver Steeg (USC Information Sciences Institute)
More from the Same Authors
- 2021: Bayesian Image Reconstruction using Deep Generative Models
  Razvan Marinescu · Daniel Moyer · Polina Golland
- 2022: Federated Progressive Sparsification (Purge-Merge-Tune)+
  Dimitris Stripelis · Umang Gupta · Greg Ver Steeg · Jose-Luis Ambite
- 2022: Bounding the Effects of Continuous Treatments for Hidden Confounders
  Myrl Marmarelis · Greg Ver Steeg · Neda Jahanshad · Aram Galstyan
- 2021 Poster: Information-theoretic generalization bounds for black-box learning algorithms
  Hrayr Harutyunyan · Maxim Raginsky · Greg Ver Steeg · Aram Galstyan
- 2021 Poster: Hamiltonian Dynamics with Non-Newtonian Momentum for Rapid Sampling
  Greg Ver Steeg · Aram Galstyan
- 2021 Poster: Implicit SVD for Graph Representation Learning
  Sami Abu-El-Haija · Hesham Mostafa · Marcel Nassar · Valentino Crespi · Greg Ver Steeg · Aram Galstyan
- 2020 Contributed Talk 4: Annealed Importance Sampling with q-Paths
  Rob Brekelmans
- 2020 Workshop: Deep Learning through Information Geometry
  Pratik Chaudhari · Alexander Alemi · Varun Jog · Dhagash Mehta · Frank Nielsen · Stefano Soatto · Greg Ver Steeg
- 2020 Poster: Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective
  Vu Nguyen · Vaden Masrani · Rob Brekelmans · Michael A Osborne · Frank Wood
- 2019 Poster Session
  Gergely Flamich · Shashanka Ubaru · Charles Zheng · Josip Djolonga · Kristoffer Wickstrøm · Diego Granziol · Konstantinos Pitas · Jun Li · Robert Williamson · Sangwoong Yoon · Kwot Sin Lee · Julian Zilly · Linda Petrini · Ian Fischer · Zhe Dong · Alexander Alemi · Bao-Ngoc Nguyen · Rob Brekelmans · Tailin Wu · Aditya Mahajan · Alexander Li · Kirankumar Shiragur · Yair Carmon · Linara Adilova · SHIYU LIU · Bang An · Sanjeeb Dash · Oktay Gunluk · Arya Mazumdar · Mehul Motani · Julia Rosenzweig · Michael Kamp · Marton Havasi · Leighton P Barnes · Zhengqing Zhou · Yi Hao · Dylan Foster · Yuval Benjamini · Nati Srebro · Michael Tschannen · Paul Rubenstein · Sylvain Gelly · John Duchi · Aaron Sidford · Robin Ru · Stefan Zohren · Murtaza Dalal · Michael A Osborne · Stephen J Roberts · Moses Charikar · Jayakumar Subramanian · Xiaodi Fan · Max Schwarzer · Nicholas Roberts · Simon Lacoste-Julien · Vinay Prabhu · Aram Galstyan · Greg Ver Steeg · Lalitha Sankar · Yung-Kyun Noh · Gautam Dasarathy · Frank Park · Ngai-Man (Man) Cheung · Ngoc-Trung Tran · Linxiao Yang · Ben Poole · Andrea Censi · Tristan Sylvain · R Devon Hjelm · Bangjie Liu · Jose Gallego-Posada · Tyler Sypherd · Kai Yang · Jan Nikolas Morshuis
- 2019 Poster: Fast structure learning with modular regularization
  Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan
- 2019 Spotlight: Fast structure learning with modular regularization
  Greg Ver Steeg · Hrayr Harutyunyan · Daniel Moyer · Aram Galstyan
- 2018 Poster: Invariant Representations without Adversarial Training
  Daniel Moyer · Shuyang Gao · Rob Brekelmans · Aram Galstyan · Greg Ver Steeg
- 2017 Coffee break and Poster Session II
  Mohamed Kane · Albert Haque · Vagelis Papalexakis · John Guibas · Peter Li · Carlos Arias · Eric Nalisnick · Padhraic Smyth · Frank Rudzicz · Xia Zhu · Theodore Willke · Noemie Elhadad · Hans Raffauf · Harini Suresh · Paroma Varma · Yisong Yue · Ognjen (Oggi) Rudovic · Luca Foschini · Syed Rameel Ahmad · Hasham ul Haq · Valerio Maggio · Giuseppe Jurman · Sonali Parbhoo · Pouya Bashivan · Jyoti Islam · Mirco Musolesi · Chris Wu · Alexander Ratner · Jared Dunnmon · Cristóbal Esteban · Aram Galstyan · Greg Ver Steeg · Hrant Khachatrian · Marc Górriz · Mihaela van der Schaar · Anton Nemchenko · Manasi Patwardhan · Tanay Tandon
- 2016 Poster: Variational Information Maximization for Feature Selection
  Shuyang Gao · Greg Ver Steeg · Aram Galstyan
- 2014 Poster: Discovering Structure in High-Dimensional Data Through Correlation Explanation
  Greg Ver Steeg · Aram Galstyan
- 2011 Poster: Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs
  Armen Allahverdyan · Aram Galstyan