

Poster

Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

Oren Mangoubi · Nisheeth Vishnoi

Hall J (level 1) #819

Keywords: [ infinity distance ] [ Markov chains ] [ Sampling ] [ log-concave distribution ] [ differential privacy ]


Abstract: For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} \left|\log \frac{\nu(\theta)}{\pi(\theta)}\right|$ to $\pi$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling within $\varepsilon$ infinity distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a point $\varepsilon$-close to $\pi$ in infinity distance and requires at most $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ calls to a membership oracle for $K$ and an evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works, which construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error, and instead presents a method to directly convert continuous samples from $K$ with total-variation bounds into samples with infinity-distance bounds. This approach also allows us to obtain an improvement in the dimension dependence $d$ of the running time for the problem of sampling from a log-concave distribution on a polytope $K$ within infinity distance $\varepsilon$, by plugging in TV-distance running-time bounds for the Dikin walk Markov chain.
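The abstract contrasts total-variation and infinity-distance guarantees. The following is a minimal, hedged sketch (not the paper's conversion procedure) that illustrates why the two notions can differ sharply: the probability vectors pi and nu below are hypothetical stand-ins for densities on $K$, and the code only evaluates the two distances as defined above.

```python
import numpy as np

def tv_distance(pi, nu):
    """Total-variation distance: (1/2) * sum_i |pi_i - nu_i|."""
    return 0.5 * np.sum(np.abs(pi - nu))

def infinity_distance(pi, nu):
    """Infinity distance: max_i |log(nu_i / pi_i)|; infinite if the supports differ."""
    if np.any((pi > 0) != (nu > 0)):
        return np.inf
    mask = pi > 0
    return np.max(np.abs(np.log(nu[mask] / pi[mask])))

if __name__ == "__main__":
    # Hypothetical example: nu drops a low-probability point of pi.
    pi = np.array([0.5, 0.5 - 1e-6, 1e-6])
    nu = np.array([0.5, 0.5, 0.0])

    # TV distance is tiny, yet the infinity distance is unbounded, which is
    # why TV guarantees alone do not suffice in the differential-privacy setting.
    print(tv_distance(pi, nu))        # ~1e-6
    print(infinity_distance(pi, nu))  # inf
```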
