Adaptive Federated Learning via a Dynamical System Model
Abstract
Hyperparameter tuning is critical for stable and efficient convergence in heterogeneous federated learning, where clients differ in computational power and data distributions are non-IID. Manual tuning is computationally expensive and scales poorly as the number of clients grows. To address this, we introduce an end-to-end adaptive framework in which both the clients and the central server automatically adjust their learning rates and momentum parameters. Our approach models federated learning as a dynamical system, allowing us to leverage principles from numerical simulation and circuit theory. Through this lens, momentum is chosen to critically damp the dynamical system for fast convergence, while client and server step sizes are adapted to satisfy numerical accuracy conditions from the simulation viewpoint. The result is an adaptive, momentum-based algorithm that avoids costly tuning while remaining robust to heterogeneity. Our method effectively mitigates issues such as objective inconsistency and client drift, and achieves faster, more stable convergence than state-of-the-art adaptive methods.
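As a rough illustration of the critical-damping principle invoked above (a minimal textbook sketch assuming a one-dimensional quadratic objective with curvature $\mu$, not the paper's actual adaptation rule), consider the continuous-time heavy-ball dynamics
\[
  \ddot{w}(t) + \gamma\,\dot{w}(t) + \mu\, w(t) = 0,
  \qquad
  \lambda = \frac{-\gamma \pm \sqrt{\gamma^{2} - 4\mu}}{2}.
\]
The system is critically damped when $\gamma = 2\sqrt{\mu}$, the choice that gives the fastest non-oscillatory decay. In the discrete setting, the momentum coefficient plays the role of the damping $\gamma$ and the learning rate plays the role of the simulation step size, which motivates tying momentum to damping and step sizes to numerical accuracy control.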