When is the Convergence Time of Langevin Algorithms Dimension Independent? A Composite Optimization Viewpoint
Yoav S Freund · Yi-An Ma · Tong Zhang

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #1006

There has been a surge of work bridging MCMC sampling and optimization, with a specific focus on translating non-asymptotic convergence guarantees for optimization problems into the analysis of Langevin algorithms in MCMC sampling. A conspicuous distinction between the convergence analysis of Langevin sampling and that of optimization is that all known convergence rates for Langevin algorithms depend on the dimensionality of the problem, whereas the convergence rates for optimization are dimension-free for convex problems. Whether a dimension-independent convergence rate can be achieved by the Langevin algorithm has thus been a long-standing open problem. This paper provides an affirmative answer to this problem for the case of either Lipschitz or smooth convex functions with normal priors. By viewing the Langevin algorithm as composite optimization, we develop a new analysis technique that leads to dimension-independent convergence rates for such problems.
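For readers unfamiliar with the sampling scheme the abstract refers to, below is a minimal sketch of the unadjusted Langevin algorithm (ULA), whose iteration is a gradient step on the potential plus injected Gaussian noise. The target potential, step size, and function names here are illustrative assumptions, not details from the paper.

```python
# Sketch of the unadjusted Langevin algorithm (ULA), illustrative only.
# The target f(x) = ||x||^2 / 2 (standard normal target) and the step
# size eta are assumptions chosen for the demo, not from the paper.
import numpy as np

def ula_sample(grad_f, x0, eta, n_steps, rng):
    """Iterate x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    where xi_k is standard Gaussian noise; returns all iterates."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        xi = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * xi
        samples[k] = x
    return samples

# Demo: sample a 5-dimensional standard normal, whose potential has
# gradient grad_f(x) = x; discard an initial burn-in segment.
d = 5
rng = np.random.default_rng(0)
samples = ula_sample(lambda x: x, np.zeros(d), eta=0.05, n_steps=20000, rng=rng)
burn_in = samples[5000:]
mean_est = float(burn_in.mean())
var_est = float(burn_in.var())
```

For this Gaussian target the chain's stationary mean is 0 and its variance is close to 1 (ULA carries an O(eta) discretization bias), so the estimates above land near those values.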

Author Information

Yoav S Freund (University of California, San Diego)
Yi-An Ma
Tong Zhang (The Hong Kong University of Science and Technology)