Poster
Sampling from Log-Concave Distributions with Infinity-Distance Guarantees
Oren Mangoubi · Nisheeth Vishnoi
Hall J (level 1) #819
Keywords: [ infinity distance ] [ Markov chains ] [ Sampling ] [ log-concave distribution ] [ differential privacy ]
Abstract:
For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close in infinity-distance $\sup_{\theta \in K} \left|\log \frac{\nu(\theta)}{\pi(\theta)}\right|$ to $\pi$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling in $\varepsilon$ infinity distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a point $\varepsilon$-close to $\pi$ in infinity distance and requires at most $\mathrm{poly}\left(\log \frac{1}{\varepsilon}, d\right)$ calls to a membership oracle for $K$ and an evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works, which construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error; instead, we present a method to directly convert continuous samples from $K$ with total-variation bounds into samples with infinity-distance bounds. This approach also allows us to improve the dependence on the dimension $d$ in the running time for the problem of sampling from a log-concave distribution on a polytope $K$ with $\varepsilon$ infinity distance, by plugging in TV-distance runtime bounds for the Dikin walk Markov chain.
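The two error notions the abstract contrasts can be made concrete numerically. The sketch below (not the paper's algorithm; the densities and perturbation are illustrative assumptions) computes the total-variation distance and the infinity distance $\sup_\theta |\log \frac{\nu(\theta)}{\pi(\theta)}|$ between two discretized log-concave densities on $[0,1]$, showing that a small infinity distance forces a small TV distance but not conversely:

```python
import numpy as np

# Toy illustration (not the paper's algorithm): compare total-variation
# distance with the infinity distance sup_theta |log(nu(theta)/pi(theta))|
# for two discretized densities on the convex body K = [0, 1].

grid = np.linspace(0.0, 1.0, 1001)
dx = grid[1] - grid[0]

# pi proportional to exp(-f) with f(theta) = 4*theta, a Lipschitz potential.
pi = np.exp(-4.0 * grid)
pi /= pi.sum() * dx  # normalize to a probability density on the grid

# nu: an approximation of pi whose log-density ratio is uniformly small.
nu = np.exp(-4.0 * grid - 0.05 * np.sin(6.0 * grid))
nu /= nu.sum() * dx

tv = 0.5 * np.sum(np.abs(nu - pi)) * dx          # total-variation distance
inf_dist = np.max(np.abs(np.log(nu / pi)))       # infinity distance

# A small infinity distance d bounds TV: TV(nu, pi) <= e^d - 1. The converse
# fails: a density-ratio spike on a tiny subset of K barely moves TV but
# blows up the infinity distance, which is why converting TV guarantees
# into infinity-distance guarantees is nontrivial.
print(f"TV distance:       {tv:.4f}")
print(f"infinity distance: {inf_dist:.4f}")
```

The infinity-distance guarantee is the one needed in differentially private optimization, since privacy proofs bound the density ratio pointwise rather than on average.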