

Poster

Constrained Langevin Algorithms with L-mixing External Random Variables

Yuping Zheng · Andrew Lamperski

Hall J (level 1) #540

Keywords: [ Non-Convex Optimization ] [ Markov Chain Monte Carlo sampling ] [ L-mixing processes ] [ Langevin algorithms ] [ Gradient descent methods ] [ Non-asymptotic analysis ]


Abstract: Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning. In recent years, the non-asymptotic analysis of Langevin algorithms for non-convex learning has been extensively explored. For constrained problems with non-convex losses over a compact convex domain and IID data variables, the projected Langevin algorithm achieves a deviation of $O(T^{-1/4} (\log T)^{1/2})$ in $1$-Wasserstein distance from its target distribution [Lamperski, 2021]. In this paper, we obtain a deviation of $O(T^{-1/2} \log T)$ in $1$-Wasserstein distance for non-convex losses with $L$-mixing data variables and polyhedral constraints (which are not necessarily bounded). This improves on the previous bound for constrained problems and matches the best-known bound for unconstrained problems.
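To make the algorithm described above concrete, the sketch below shows one common form of a projected Langevin iteration: a gradient step on the loss, injected Gaussian noise, and a Euclidean projection back onto the constraint set. It is a minimal illustration, not the paper's exact scheme: the box constraint (a simple polyhedral set), the placeholder gradient `grad_f`, and all parameter values are assumptions made for demonstration.

```python
# Minimal sketch of a projected Langevin iteration, assuming a box constraint
# [lo, hi]^d (a simple polyhedral set); the loss gradient and parameters are
# illustrative placeholders, not the paper's exact setup.
import numpy as np

def grad_f(x, data):
    # Hypothetical non-convex loss gradient evaluated on one data variable.
    return 2 * x * np.cos(x @ data)

def projected_langevin(x0, data_stream, eta=1e-3, beta=1.0,
                       lo=-1.0, hi=1.0, steps=10_000, seed=0):
    """Gradient step + Gaussian noise, then projection onto the constraint set."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for k in range(steps):
        data = data_stream[k % len(data_stream)]   # possibly dependent (e.g. L-mixing) data
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x, data) + np.sqrt(2.0 * eta / beta) * noise
        x = np.clip(x, lo, hi)                     # Euclidean projection onto the box
    return x

# Usage: run the chain on a 2-dimensional problem constrained to [-1, 1]^2.
stream = [np.random.default_rng(s).standard_normal(2) for s in range(100)]
print(projected_langevin(np.zeros(2), stream))
```

With a polyhedral constraint set more general than a box, the `np.clip` step would be replaced by a projection computed via a small quadratic program; the box case is used here only to keep the sketch self-contained.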
