Online Statistical Inference for Proximal Stochastic Gradient Descent under Markovian Sampling
Abstract
Nonsmooth stochastic optimization has emerged as a fundamental framework for modeling complex machine learning problems, particularly those involving constraints. Proximal stochastic gradient descent (proximal SGD) is the predominant algorithm for solving such problems. While existing research focuses on the i.i.d. data setting, nonsmooth optimization under Markovian sampling remains largely unexplored. This work proposes an online statistical inference procedure, based on proximal SGD, for nonsmooth optimization under Markovian sampling. We establish asymptotic normality of the averaged proximal SGD iterates and introduce a random scaling inference method that constructs parameter-free pivotal statistics through appropriate normalization. Our approach yields asymptotically valid confidence intervals, and the entire inference procedure is fully online.
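To make the procedure concrete, the following is a minimal sketch, not the paper's exact algorithm or settings: online proximal SGD with iterate averaging and a random-scaling confidence interval, applied to a synthetic l1-penalized linear regression whose covariates follow a simple Markov (AR(1)) chain. The step-size schedule, penalty level, problem instance, and the 6.747 critical value are illustrative assumptions drawn from the broader random-scaling literature rather than from this abstract.

```python
# Hedged sketch: online proximal SGD + iterate averaging + random-scaling CIs.
# All problem parameters below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta_star = np.array([1.0, -0.5, 0.0, 0.0, 0.25])  # assumed ground truth
lam = 0.01            # l1 penalty weight (assumed)
n_steps = 100_000

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Markovian covariate chain: x_t = rho * x_{t-1} + noise (AR(1)).
rho = 0.5
x_prev = rng.normal(size=d)

theta = np.zeros(d)       # current proximal SGD iterate
theta_bar = np.zeros(d)   # running average of iterates
# Running sums for the diagonal of the random-scaling matrix
# V_n = n^{-2} * sum_{s<=n} s^2 (bar_theta_s - bar_theta_n)(bar_theta_s - bar_theta_n)^T.
sum_s2_bar2 = np.zeros(d)  # sum of s^2 * bar_theta_s^2 (elementwise)
sum_s2_bar = np.zeros(d)   # sum of s^2 * bar_theta_s
sum_s2 = 0.0               # sum of s^2

for t in range(1, n_steps + 1):
    # One Markovian sample: AR(1) covariate, noisy linear response.
    x_t = rho * x_prev + np.sqrt(1 - rho**2) * rng.normal(size=d)
    x_prev = x_t
    y_t = x_t @ theta_star + 0.5 * rng.normal()

    # Proximal SGD step: gradient of the smooth squared loss, then prox of lam*||.||_1.
    eta = 0.5 * t ** (-0.67)                 # polynomially decaying step size (assumed)
    grad = (x_t @ theta - y_t) * x_t
    theta = soft_threshold(theta - eta * grad, eta * lam)

    # Online iterate averaging and random-scaling accumulators.
    theta_bar += (theta - theta_bar) / t
    sum_s2_bar2 += t**2 * theta_bar**2
    sum_s2_bar += t**2 * theta_bar
    sum_s2 += t**2

# Diagonal of the random-scaling matrix and per-coordinate 95% intervals.
V_diag = (sum_s2_bar2 - 2 * theta_bar * sum_s2_bar + sum_s2 * theta_bar**2) / n_steps**2
crit = 6.747  # asymptotic 97.5% quantile commonly tabulated for random scaling
half_width = crit * np.sqrt(V_diag / n_steps)
for j in range(d):
    print(f"theta[{j}]: {theta_bar[j]: .4f}  CI = "
          f"[{theta_bar[j] - half_width[j]: .4f}, {theta_bar[j] + half_width[j]: .4f}]")
```

Every quantity above is maintained with O(d) memory per step (or O(d^2) if the full random-scaling matrix is tracked), which is what makes the inference procedure fully online.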