Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020). As a special case of this framework, we propose a class of diffusions called Newton-Langevin diffusions and prove that they converge to stationarity exponentially fast with a rate which not only is dimension-free, but also has no dependence on the target distribution. We give an application of this result to the problem of sampling from the uniform distribution on a convex body using a strategy inspired by interior-point methods. Our general approach follows the recent trend of linking sampling and optimization and highlights the role of the chi-squared divergence. In particular, it yields new results on the convergence of the vanilla Langevin diffusion in Wasserstein distance.
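To make the idea concrete, below is a minimal illustrative sketch (not the paper's algorithm, which is analyzed in continuous time) of an Euler-Maruyama discretization of a Newton-preconditioned Langevin dynamics of the form dX_t = -[∇²V(X_t)]^{-1} ∇V(X_t) dt + √2 [∇²V(X_t)]^{-1/2} dB_t, applied to an assumed ill-conditioned Gaussian target. All function names, the step size, and the target are illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_langevin_step(x, grad_V, hess_V, h, rng):
    """One Euler-Maruyama step of a Newton-preconditioned Langevin dynamics:
    dX_t = -[H(X_t)]^{-1} grad V(X_t) dt + sqrt(2) [H(X_t)]^{-1/2} dB_t,
    where H = hess V. This is only an illustrative discretization."""
    H = hess_V(x)
    g = grad_V(x)
    drift = -np.linalg.solve(H, g)                 # Newton direction
    # Noise preconditioned by H^{-1/2}: draw z ~ N(0, H^{-1}) via Cholesky.
    L = np.linalg.cholesky(np.linalg.inv(H))
    noise = L @ rng.standard_normal(x.shape)
    return x + h * drift + np.sqrt(2.0 * h) * noise

# Illustrative target: anisotropic Gaussian, V(x) = 0.5 (x - mu)^T Sigma^{-1} (x - mu).
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[100.0, 0.0], [0.0, 0.01]])      # ill-conditioned covariance
Sigma_inv = np.linalg.inv(Sigma)
grad_V = lambda x: Sigma_inv @ (x - mu)
hess_V = lambda x: Sigma_inv                       # constant Hessian for a Gaussian

x = np.zeros(2)
for _ in range(1000):
    x = newton_langevin_step(x, grad_V, hess_V, h=0.1, rng=rng)
```

In this sketch the drift is exactly a Newton step on the potential V, and the injected noise is scaled by the inverse Hessian, which is what makes the continuous-time convergence rate insensitive to the conditioning of the target in the paper's analysis.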
Author Information
Sinho Chewi (Massachusetts Institute of Technology)
Thibaut Le Gouic (Massachusetts Institute of Technology)
Chen Lu (Massachusetts Institute of Technology)
Tyler Maunu (Massachusetts Institute of Technology)
Philippe Rigollet (Massachusetts Institute of Technology)
Austin Stromme (Massachusetts Institute of Technology)
More from the Same Authors
- 2021 Spotlight: Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent »
  Jason Altschuler · Sinho Chewi · Patrik Robert Gerber · Austin Stromme
- 2022: Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions »
  Sitan Chen · Sinho Chewi · Jerry Li · Yuanzhi Li · Adil Salim · Anru Zhang
- 2023 Poster: The emergence of clusters in self-attention dynamics »
  Borjan Geshkovski · Cyril Letrouit · Yury Polyanskiy · Philippe Rigollet
- 2023 Poster: Learning threshold neurons via edge of stability »
  Kwangjun Ahn · Sebastien Bubeck · Sinho Chewi · Yin Tat Lee · Felipe Suarez · Yi Zhang
- 2023 Poster: The probability flow ODE is provably fast »
  Sitan Chen · Sinho Chewi · Holden Lee · Yuanzhi Li · Jianfeng Lu · Adil Salim
- 2022 Poster: Variational inference via Wasserstein gradient flows »
  Marc Lambert · Sinho Chewi · Francis Bach · Silvère Bonnabel · Philippe Rigollet
- 2022 Poster: GULP: a prediction-based metric between representations »
  Enric Boix-Adsera · Hannah Lawrence · George Stepaniants · Philippe Rigollet
- 2021 Poster: Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent »
  Jason Altschuler · Sinho Chewi · Patrik Robert Gerber · Austin Stromme
- 2021 Poster: Efficient constrained sampling via the mirror-Langevin algorithm »
  Kwangjun Ahn · Sinho Chewi
- 2020 Poster: SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence »
  Sinho Chewi · Thibaut Le Gouic · Chen Lu · Tyler Maunu · Philippe Rigollet
- 2019 Poster: Power analysis of knockoff filters for correlated designs »
  Jingbo Liu · Philippe Rigollet
- 2017 Poster: Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration »
  Jason Altschuler · Jonathan Niles-Weed · Philippe Rigollet
- 2017 Spotlight: Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration »
  Jason Altschuler · Jonathan Niles-Weed · Philippe Rigollet