Statistical Inference for Decentralized Federated Learning
Abstract
This paper considers decentralized Federated Learning (FL) for M-estimation under heterogeneous distributions among distributed clients or data blocks. The mean squared error and the consensus error across the estimators from different clients produced by the decentralized stochastic gradient descent algorithm are derived. The asymptotic normality of the Polyak-Ruppert (PR) averaged estimator in the decentralized distributed setting is established, showing that its statistical efficiency comes at a cost: the result is more restrictive on the number of clients than that in the distributed M-estimation. To overcome this restriction, a one-step estimator is proposed that permits a much larger number of clients while still achieving the same efficiency as the original PR-averaged estimator in the nondistributed setting. Confidence regions based on both the PR-averaged estimator and the proposed one-step estimator are constructed to facilitate statistical inference for decentralized FL.