We introduce a gradient-based learning method to automatically adapt Markov chain Monte Carlo (MCMC) proposal distributions to intractable targets. We define a maximum-entropy regularised objective function, referred to as the generalised speed measure, which can be robustly optimised over the parameters of the proposal distribution by stochastic gradient optimisation. An advantage of our method over traditional adaptive MCMC methods is that adaptation occurs even when candidate states are rejected. This is a highly desirable property of any adaptation strategy, because adaptation begins in the early iterations even if the initial proposal distribution is far from optimal. We apply the framework to learn multivariate random walk Metropolis and Metropolis-adjusted Langevin proposals with full covariance matrices, and provide empirical evidence that our method can outperform other MCMC algorithms, including Hamiltonian Monte Carlo schemes.
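To make the idea above concrete, here is a minimal illustrative sketch, not the authors' implementation: a random walk Metropolis sampler whose proposal scale is adapted by stochastic gradient ascent on an entropy-regularised acceptance objective, so the update fires whether or not a candidate is accepted. The function and parameter names (`adaptive_rwm`, `log_target`, `grad_log_target`, `beta`, `lr`) are hypothetical, the objective is one simplified reading of a generalised speed measure, and a diagonal scale is used instead of the full covariance matrices described in the abstract.

```python
import numpy as np

def adaptive_rwm(log_target, grad_log_target, x0, n_iters=5000,
                 lr=1e-3, beta=1.0, seed=0):
    """Hypothetical sketch: random-walk Metropolis with per-dimension step
    sizes adapted by stochastic gradient on  E[log alpha] + beta * entropy,
    using the reparameterisation y = x + exp(log_sigma) * eps."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    log_sigma = np.zeros(d)                       # log step sizes (diagonal proposal)
    lpx = log_target(x)
    samples = []
    for _ in range(n_iters):
        eps = rng.standard_normal(d)
        y = x + np.exp(log_sigma) * eps           # reparameterised proposal
        lpy = log_target(y)
        log_alpha = min(0.0, lpy - lpx)           # log Metropolis acceptance probability
        # Gradient of log_alpha w.r.t. log_sigma via the chain rule; the
        # min(0, .) term is flat whenever the move would be accepted surely.
        if lpy - lpx < 0.0:
            g_logalpha = grad_log_target(y) * np.exp(log_sigma) * eps
        else:
            g_logalpha = np.zeros(d)
        # Entropy of the Gaussian proposal grows linearly in log_sigma,
        # so its gradient is simply beta per dimension.
        log_sigma += lr * (g_logalpha + beta)     # adapt even on rejected moves
        if np.log(rng.uniform()) < log_alpha:     # standard accept/reject step
            x, lpx = y, lpy
        samples.append(x.copy())
    return np.array(samples), np.exp(log_sigma)
```

In this toy version, the entropy term pushes the step sizes up while the acceptance term pushes them down, so the scales settle where the two effects balance; because the gradient is evaluated at the proposed point, adaptation proceeds even when the move is rejected, which is the property the abstract emphasises.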
Author Information
Michalis Titsias (DeepMind)
Petros Dellaportas (University College London, Athens University of Economics and Alan Turing Institute)
More from the Same Authors
- 2022: Online Continual Learning from Imbalanced Data with Kullback-Leibler-loss based replay buffer updates
  Sotirios Nikoloutsopoulos · Iordanis Koutsopoulos · Michalis Titsias
- 2021 Poster: Entropy-based adaptive Hamiltonian Monte Carlo
  Marcel Hirt · Michalis Titsias · Petros Dellaportas
- 2019 Poster: Copula-like Variational Inference
  Marcel Hirt · Petros Dellaportas · Alain Durmus