Unifying Distributed Optimization, Algorithmic Stability, and Privacy-Preserving Learning through Converse Lyapunov Theory
Abstract
Optimization algorithms are increasingly deployed in dynamic, uncertain environments where disturbances such as delays, quantization, and noise injection inevitably affect system behavior. This paper presents a systems-theoretic framework that characterizes the impact of such disturbances on algorithmic convergence. We focus on algorithms that exhibit linear convergence and leverage converse Lyapunov theorems to derive explicit bounds quantifying how bounded disturbances degrade algorithmic performance. We further demonstrate how our results can be used to assess the effects of disturbances in a wide variety of applications, including communication constraints in distributed learning, sensitivity analysis for machine learning generalization, and intentional noise injection for privacy. Overall, the results offer a unified way to analyze the behavior of linearly convergent algorithms in the presence of noise, disturbances, and dynamic interactions.
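For intuition, converse Lyapunov arguments of this kind typically yield bounds of input-to-state-stability (ISS) flavor. The following display is an illustrative sketch, assuming an iteration $x_{k+1} = T(x_k) + d_k$ whose unperturbed map $T$ is a contraction with rate $\rho \in (0,1)$ and fixed point $x^\star$; the constants $c$ and $\gamma$ are hypothetical placeholders, not results stated in this paper:

\[
\|x_k - x^\star\| \;\le\; c\,\rho^k\,\|x_0 - x^\star\| \;+\; \gamma \sup_{0 \le j < k} \|d_j\|.
\]

Under these assumptions, the iterates converge geometrically to a neighborhood of $x^\star$ whose radius scales with the magnitude of the bounded disturbance, which is the type of guarantee the abstract refers to.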