Hamiltonian Monte Carlo (HMC) is a popular Markov chain Monte Carlo (MCMC) algorithm for sampling from an unnormalized probability distribution. In practice, HMC is commonly implemented with a leapfrog integrator, whose performance can be sensitive to the choice of mass matrix. We develop a gradient-based algorithm that adapts the mass matrix by encouraging the leapfrog integrator to have high acceptance rates while also exploring all dimensions jointly. In contrast to previous work that adapts the hyperparameters of HMC using some form of expected squared jumping distance, the adaptation strategy suggested here aims to increase sampling efficiency by maximizing an approximation of the proposal entropy. We illustrate that using multiple gradient steps in the HMC proposal can be beneficial compared to the single gradient step of Metropolis-adjusted Langevin proposals. Empirical evidence suggests that the adaptation method can outperform different versions of HMC schemes by adjusting the mass matrix to the geometry of the target distribution and by providing some control over the integration time.
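To make the setup concrete, the sketch below shows a standard Metropolis-adjusted HMC transition with a leapfrog integrator and a diagonal mass matrix, the quantity the paper proposes to adapt. It is a minimal NumPy illustration under assumptions of my own, not the paper's entropy-based adaptation scheme: the function names (`leapfrog`, `hmc_step`), the toy ill-conditioned Gaussian target, and the step-size values are all hypothetical choices for the example.

```python
import numpy as np

def leapfrog(q, p, grad_log_prob, step_size, n_steps, inv_mass):
    """Simulate Hamiltonian dynamics with n_steps leapfrog steps."""
    q, p = q.copy(), p.copy()
    p += 0.5 * step_size * grad_log_prob(q)       # initial half step for momentum
    for _ in range(n_steps - 1):
        q += step_size * inv_mass * p             # full step for position
        p += step_size * grad_log_prob(q)         # full step for momentum
    q += step_size * inv_mass * p
    p += 0.5 * step_size * grad_log_prob(q)       # final half step for momentum
    return q, p

def hmc_step(q, log_prob, grad_log_prob, step_size, n_steps, mass, rng):
    """One Metropolis-adjusted HMC transition with diagonal mass matrix `mass`."""
    inv_mass = 1.0 / mass
    p0 = rng.normal(size=q.shape) * np.sqrt(mass)            # momentum ~ N(0, M)
    q1, p1 = leapfrog(q, p0, grad_log_prob, step_size, n_steps, inv_mass)
    # Hamiltonian H(q, p) = -log pi(q) + (1/2) p^T M^{-1} p
    h0 = -log_prob(q) + 0.5 * np.sum(inv_mass * p0**2)
    h1 = -log_prob(q1) + 0.5 * np.sum(inv_mass * p1**2)
    accept = np.log(rng.uniform()) < h0 - h1                 # accept w.p. min(1, e^{h0-h1})
    return (q1, True) if accept else (q, False)

# Hypothetical toy target: a zero-mean Gaussian with very different scales.
rng = np.random.default_rng(0)
target_var = np.array([1.0, 100.0])
log_prob = lambda q: -0.5 * np.sum(q**2 / target_var)
grad_log_prob = lambda q: -q / target_var

# Setting the mass matrix to the inverse target covariance "whitens" the
# dynamics, so a single step size works well across both dimensions.
q, mass, n_accept = np.zeros(2), 1.0 / target_var, 0
for _ in range(2000):
    q, ok = hmc_step(q, log_prob, grad_log_prob, 0.8, 10, mass, rng)
    n_accept += ok
print("acceptance rate:", n_accept / 2000)
```

Here the hand-picked mass matrix plays the role that the paper's gradient-based adaptation would learn automatically: with an identity mass matrix instead, the step size would have to shrink to match the narrowest dimension, slowing exploration along the wide one.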
Author Information
Marcel Hirt (University College London)
Michalis Titsias (DeepMind)
Petros Dellaportas (University College London and Athens University of Economics and Business)
More from the Same Authors
- 2022 Spotlight: Lightning Talks 1B-4 »
  Andrei Atanov · Shiqi Yang · Wanshan Li · Yongchang Hao · Ziquan Liu · Jiaxin Shi · Anton Plaksin · Jiaxiang Chen · Ziqi Pan · yaxing wang · Yuxin Liu · Stepan Martyanov · Alessandro Rinaldo · Yuhao Zhou · Li Niu · Qingyuan Yang · Andrei Filatov · Yi Xu · Liqing Zhang · Lili Mou · Ruomin Huang · Teresa Yeo · kai wang · Daren Wang · Jessica Hwang · Yuanhong Xu · Qi Qian · Hu Ding · Michalis Titsias · Shangling Jui · Ajay Sohmshetty · Lester Mackey · Joost van de Weijer · Hao Li · Amir Zamir · Xiangyang Ji · Antoni Chan · Rong Jin
- 2022 Spotlight: Gradient Estimation with Discrete Stein Operators »
  Jiaxin Shi · Yuhao Zhou · Jessica Hwang · Michalis Titsias · Lester Mackey
- 2022 Poster: Gradient Estimation with Discrete Stein Operators »
  Jiaxin Shi · Yuhao Zhou · Jessica Hwang · Michalis Titsias · Lester Mackey
- 2019 Poster: Gradient-based Adaptive Markov Chain Monte Carlo »
  Michalis Titsias · Petros Dellaportas
- 2019 Poster: Copula-like Variational Inference »
  Marcel Hirt · Petros Dellaportas · Alain Durmus
- 2017 Workshop: Advances in Approximate Bayesian Inference »
  Francisco Ruiz · Stephan Mandt · Cheng Zhang · James McInerney · Dustin Tran · David Blei · Max Welling · Tamara Broderick · Michalis Titsias
- 2016 Poster: One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities »
  Michalis Titsias
- 2016 Poster: The Generalized Reparameterization Gradient »
  Francisco Ruiz · Michalis Titsias · David Blei
- 2015 Poster: Local Expectation Gradients for Black Box Variational Inference »
  Michalis Titsias · Miguel Lázaro-Gredilla
- 2014 Poster: Hamming Ball Auxiliary Sampling for Factorial Hidden Markov Models »
  Michalis Titsias · Christopher Yau
- 2014 Spotlight: Hamming Ball Auxiliary Sampling for Factorial Hidden Markov Models »
  Michalis Titsias · Christopher Yau
- 2013 Poster: Variational Inference for Mahalanobis Distance Metrics in Gaussian Process Regression »
  Michalis Titsias · Miguel Lázaro-Gredilla