We develop a novel method for carrying out model selection for Bayesian autoencoders (BAEs) by means of prior hyper-parameter optimization. Inspired by the common practice of type-II maximum likelihood optimization and its equivalence to Kullback-Leibler divergence minimization, we propose to optimize the distributional sliced-Wasserstein distance (DSWD) between the output of the autoencoder and the empirical data distribution. The advantages of this formulation are that we can estimate the DSWD based on samples and handle high-dimensional problems. We carry out posterior estimation of the BAE parameters via stochastic gradient Hamiltonian Monte Carlo and turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space. Thanks to this approach, we obtain a powerful alternative to variational autoencoders, which are the preferred choice in modern applications of autoencoders for representation learning with uncertainty. We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results, outperforming multiple competitive baselines.
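The key computational idea in the abstract is that a sliced-Wasserstein distance between two distributions can be estimated purely from samples: random directions project both sample sets to one dimension, where the Wasserstein distance reduces to comparing sorted values. As a minimal illustration, the sketch below estimates the plain sliced-Wasserstein distance with uniformly random projection directions; the paper's distributional variant (DSWD) additionally learns a distribution over directions, which is not reproduced here, and the function name and defaults are illustrative, not taken from the paper's code.

```python
import numpy as np

def sliced_wasserstein_distance(x, y, num_projections=50, p=2, seed=0):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between
    two empirical distributions given as equally sized sample sets
    x, y of shape (n, d). Each random unit direction projects both
    sample sets to 1-D, where the Wasserstein distance is obtained by
    matching sorted projections."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Draw random directions on the unit sphere in R^d.
    thetas = rng.normal(size=(num_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project both sample sets onto every direction: shape (n, num_projections).
    x_proj = x @ thetas.T
    y_proj = y @ thetas.T
    # In 1-D, optimal transport matches order statistics.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return float(np.mean(np.abs(x_sorted - y_sorted) ** p) ** (1.0 / p))
```

In a BAE training loop of the kind the abstract describes, a differentiable version of this quantity (e.g. in PyTorch or JAX) would be computed between a minibatch of data and the corresponding autoencoder reconstructions, and its gradient used to update the prior hyper-parameters.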
Author Information
Ba-Hien Tran (EURECOM)
Simone Rossi (EURECOM)
Dimitrios Milios (EURECOM, Sophia Antipolis)
Pietro Michiardi (EURECOM)
Edwin Bonilla (CSIRO's Data61)
Maurizio Filippone (EURECOM)
More from the Same Authors
- 2022 Poster: All You Need is a Good Functional Prior for Bayesian Deep Learning
  Ba-Hien Tran · Simone Rossi · Dimitrios Milios · Maurizio Filippone
- 2020: Bayesian optimization by density ratio estimation
  Louis Tiao · Aaron Klein · Cedric Archambeau · Edwin Bonilla · Matthias W Seeger · Fabio Ramos
- 2020 Poster: Walsh-Hadamard Variational Inference for Bayesian Deep Learning
  Simone Rossi · Sebastien Marmin · Maurizio Filippone
- 2020 Poster: Quantile Propagation for Wasserstein-Approximate Gaussian Processes
  Rui Zhang · Christian Walder · Edwin Bonilla · Marian-Andrei Rizoiu · Lexing Xie
- 2020 Poster: Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings
  Pantelis Elinas · Edwin Bonilla · Louis Tiao
- 2020 Spotlight: Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings
  Pantelis Elinas · Edwin Bonilla · Louis Tiao
- 2019: Outstanding Contribution Talk: Variational Graph Convolutional Networks
  Edwin Bonilla
- 2019: Poster session
  Sebastian Farquhar · Erik Daxberger · Andreas Look · Matt Benatan · Ruiyi Zhang · Marton Havasi · Fredrik Gustafsson · James A Brofos · Nabeel Seedat · Micha Livne · Ivan Ustyuzhaninov · Adam Cobb · Felix D McGregor · Patrick McClure · Tim R. Davidson · Gaurush Hiranandani · Sanjeev Arora · Masha Itkina · Didrik Nielsen · William Harvey · Matias Valdenegro-Toro · Stefano Peluchetti · Riccardo Moriconi · Tianyu Cui · Vaclav Smidl · Taylan Cemgil · Jack Fitzsimons · He Zhao · mariana vargas vieyra · Apratim Bhattacharyya · Rahul Sharma · Geoffroy Dubourg-Felonneau · Jonathan Warrell · Slava Voloshynovskiy · Mihaela Rosca · Jiaming Song · Andrew Ross · Homa Fashandi · Ruiqi Gao · Hooshmand Shokri Razaghi · Joshua Chang · Zhenzhong Xiao · Vanessa Boehm · Giorgio Giannone · Ranganath Krishnan · Joe Davison · Arsenii Ashukha · Jeremiah Liu · Sicong (Sheldon) Huang · Evgenii Nikishin · Sunho Park · Nilesh Ahuja · Mahesh Subedar · Artyom Gadetsky · Jhosimar Arias Figueroa · Tim G. J. Rudner · Waseem Aslam · Adrián Csiszárik · John Moberg · Ali Hebbal · Kathrin Grosse · Pekka Marttinen · Bang An · Hlynur Jónsson · Samuel Kessler · Abhishek Kumar · Mikhail Figurnov · Omesh Tickoo · Steindor Saemundsson · Ari Heljakka · Dániel Varga · Niklas Heim · Simone Rossi · Max Laves · Waseem Gharbieh · Nicholas Roberts · Luis Armando Pérez Rey · Matthew Willetts · Prithvijit Chakrabarty · Sumedh Ghaisas · Carl Shneider · Wray Buntine · Kamil Adamczewski · Xavier Gitiaux · Suwen Lin · Hao Fu · Gunnar Rätsch · Aidan Gomez · Erik Bodin · Dinh Phung · Lennart Svensson · Juliano Tusi Amaral Laganá Pinto · Milad Alizadeh · Jianzhun Du · Kevin Murphy · Beatrix Benkő · Shashaank Vattikuti · Jonathan Gordon · Christopher Kanan · Sontje Ihler · Darin Graham · Michael Teng · Louis Kirsch · Tomas Pevny · Taras Holotyak
- 2019 Poster: Structured Variational Inference in Continuous Cox Process Models
  Virginia Aglietti · Edwin Bonilla · Theodoros Damoulas · Sally Cripps
- 2019 Poster: Pseudo-Extended Markov chain Monte Carlo
  Christopher Nemeth · Fredrik Lindsten · Maurizio Filippone · James Hensman
- 2018 Poster: Dirichlet-based Gaussian Processes for Large-scale Calibrated Classification
  Dimitrios Milios · Raffaello Camoriano · Pietro Michiardi · Lorenzo Rosasco · Maurizio Filippone
- 2015 Poster: MCMC for Variationally Sparse Gaussian Processes
  James Hensman · Alexander Matthews · Maurizio Filippone · Zoubin Ghahramani
- 2015 Poster: Scalable Inference for Gaussian Process Models with Black-Box Likelihoods
  Amir Dezfouli · Edwin Bonilla
- 2014 Poster: Extended and Unscented Gaussian Processes
  Daniel M Steinberg · Edwin Bonilla
- 2014 Spotlight: Extended and Unscented Gaussian Processes
  Daniel M Steinberg · Edwin Bonilla
- 2014 Poster: Automated Variational Inference for Gaussian Process Models
  Trung V Nguyen · Edwin Bonilla
- 2013 Workshop: Machine Learning for Sustainability
  Edwin Bonilla · Thomas Dietterich · Theodoros Damoulas · Andreas Krause · Daniel Sheldon · Iadine Chades · J. Zico Kolter · Bistra Dilkina · Carla Gomes · Hugo P Simao
- 2011 Poster: Improving Topic Coherence with Regularized Topic Models
  David Newman · Edwin Bonilla · Wray Buntine
- 2010 Poster: Gaussian Process Preference Elicitation
  Edwin Bonilla · Shengbo Guo · Scott Sanner
- 2007 Poster: Multi-task Gaussian Process Prediction
  Edwin Bonilla · Kian Ming A Chai · Chris Williams
- 2007 Spotlight: Multi-task Gaussian Process Prediction
  Edwin Bonilla · Kian Ming A Chai · Chris Williams