A State Space Dynamics Perspective on Differentially Private Decentralized Learning
Abstract
Fully decentralized training of machine learning models offers significant advantages in scalability, robustness, and fault tolerance. However, achieving differential privacy (DP) in such settings is challenging due to the absence of a central aggregator and the varying trust assumptions among nodes. We present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise. Our main contribution is a new analytical framework, based on a linear dynamical systems formulation, that accurately characterizes privacy leakage under these varying trust assumptions. We illustrate our analysis with numerical results comparing the various privacy bounds, and with a logistic regression experiment on MNIST image classification in a fully decentralized setting, demonstrating utility comparable to that of central aggregation.
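As a minimal illustration of the state-space formulation the abstract refers to (the symbols $x_t$, $W$, $\eta_t$, and $\sigma$ here are our own notation, not taken from the paper), one gossip round with additive node-level noise can be written as

\[
x_{t+1} = W \, x_t + \eta_t, \qquad \eta_t \sim \mathcal{N}\!\left(0, \sigma^2 I\right),
\]

where $W$ is a (doubly stochastic) gossip mixing matrix and $\eta_t$ stacks the per-node noise. Unrolling gives $x_T = W^T x_0 + \sum_{t=0}^{T-1} W^{\,T-1-t} \eta_t$, a linear dynamical system whose sensitivity to a single node's contribution is the quantity a privacy analysis of this kind must bound.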