

Poster

Constrained Sampling with Primal-Dual Langevin Monte Carlo

Luiz F. O. Chamon · Mohammad Reza Karimi Jaghargh · Anna Korba

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We consider the problem of sampling from a probability distribution known up to a normalization constant while satisfying a set of statistical constraints specified by the expected values of general nonlinear functions. This problem finds applications in, e.g., Bayesian inference, where it can enforce moments to evaluate counterfactual scenarios or desiderata such as prediction fairness. In contrast to the case of support constraints, methods based on mirror maps, barriers, and penalties are not suited for this task. We therefore turn to gradient descent-ascent dynamics in Wasserstein space to put forward a discrete-time primal-dual Langevin Monte Carlo algorithm (PD-LMC) that simultaneously constrains the target distribution and samples from it. We analyze the convergence of PD-LMC under typical assumptions on the target distribution and constraints, namely (strong) convexity and log-Sobolev inequalities. To do so, we bring classical optimization arguments for saddle-point algorithms to the geometry of Wasserstein space. We illustrate the relevance and effectiveness of PD-LMC in multiple applications.
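To make the primal-dual idea concrete, here is a minimal sketch (not the authors' exact algorithm, and with hypothetical choices of target, constraint, and step sizes): particles follow a Langevin step on the Lagrangian potential U(x) + λ·g(x), while the Lagrange multiplier λ is updated by gradient ascent on the estimated constraint violation E[g(x)]. The example constrains a standard Gaussian to have mean 1 via the moment constraint E[x − 1] = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup for illustration: target is a standard 1D Gaussian,
# and we enforce the moment constraint E[x] = 1, i.e. E[g(x)] = 0 below.
def grad_U(x):          # gradient of U(x) = x^2 / 2, the negative log-density
    return x

def g(x):               # constraint function; its gradient in x is 1
    return x - 1.0

n, eta, gamma, steps = 2000, 0.05, 0.05, 2000
x = rng.standard_normal(n)   # particle approximation of the current law
lam = 0.0                    # dual variable (Lagrange multiplier)

for _ in range(steps):
    # Primal step: discretized Langevin dynamics on U(x) + lam * g(x)
    noise = rng.standard_normal(n)
    x = x - eta * (grad_U(x) + lam) + np.sqrt(2.0 * eta) * noise
    # Dual step: ascent on the empirical constraint violation E[g(x)]
    lam = lam + gamma * g(x).mean()

# At a saddle point, lam tilts the Gaussian so its mean satisfies E[x] = 1
print(x.mean())
```

The dual update here uses an in-loop Monte Carlo estimate of E[g(x)] from the current particles; the paper's discrete-time analysis covers when such coupled updates converge under convexity or log-Sobolev conditions.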
