From Randomized Hamiltonian Flow to Fast Stochastic Optimization
Abstract
It was recently shown by Fu and Wibisono that Hamiltonian flow with randomized integration time and periodic velocity refreshment achieves accelerated convergence rates, and that its discrete-time counterpart, randomized Hamiltonian gradient descent (RHGD), preserves these rates in the full-gradient setting. However, applying these methods to large-scale optimization remains challenging due to the high cost of computing exact gradients. In this work, we develop Randomized Hamiltonian Stochastic Gradient Descent (RHSGD), a stochastic-gradient extension of RHGD that replaces the full gradient with an unbiased stochastic estimator while retaining the probabilistic velocity refreshment. We establish convergence guarantees for RHSGD in both the strongly convex and weakly convex settings, showing that it converges faster than stochastic gradient descent and matches the optimal rates of existing accelerated stochastic optimization methods.
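For concreteness, the following minimal Python sketch illustrates the kind of update the abstract describes: a leapfrog-style Hamiltonian step driven by a stochastic gradient, with the velocity reset at random times. The function name rhsgd, the per-iteration Bernoulli refresh, and all step-size choices are illustrative assumptions and are not taken from the paper's exact algorithm or parameter schedule.

```python
import numpy as np

# Illustrative sketch only: the precise RHSGD update rule, step sizes, and
# refresh distribution are assumptions, not the paper's specification.
def rhsgd(stoch_grad, x0, step_size=1e-2, refresh_prob=0.1,
          n_iters=1000, rng=None):
    """Minimal randomized Hamiltonian stochastic gradient descent sketch.

    stoch_grad(x) should return an unbiased estimate of the gradient of f.
    With probability `refresh_prob` the velocity is reset to zero
    (randomized refreshment); otherwise a leapfrog-style step is taken
    with the stochastic gradient in place of the full gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        if rng.random() < refresh_prob:
            v = np.zeros_like(x)                   # velocity refreshment
        else:
            v -= 0.5 * step_size * stoch_grad(x)   # half kick
            x += step_size * v                     # drift
            v -= 0.5 * step_size * stoch_grad(x)   # half kick
    return x


if __name__ == "__main__":
    # Toy quadratic f(x) = 0.5 * ||x||^2 with zero-mean gradient noise.
    noise_rng = np.random.default_rng(0)
    grad = lambda x: x + 0.01 * noise_rng.standard_normal(x.shape)
    x_final = rhsgd(grad, x0=np.ones(5))
    print(np.linalg.norm(x_final))
```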