Fast Federated Learning in the Presence of Arbitrary Device Unavailability
Xinran Gu · Kaixuan Huang · Jingzhao Zhang · Longbo Huang

Wed Dec 08 04:30 PM -- 06:00 PM (PST)

Federated learning (FL) coordinates numerous heterogeneous devices to collaboratively train a shared model while preserving user privacy. Despite its many advantages, FL faces new challenges. One arises when devices drop out of the training process: the convergence of popular FL algorithms such as FedAvg is severely degraded by straggling devices. To tackle this challenge, we study federated learning algorithms in the presence of arbitrary device unavailability and propose an algorithm named Memory-augmented Impatient Federated Averaging (MIFA). Our algorithm avoids the excessive latency induced by inactive devices and corrects the resulting gradient bias using their memorized latest updates. We prove that MIFA achieves minimax optimal convergence rates on non-i.i.d. data for both strongly convex and non-convex smooth functions. We also provide an explicit characterization of the improvement over baseline algorithms through a case study, and validate the results by numerical experiments on real-world datasets.
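To make the idea concrete, here is a minimal sketch of a MIFA-style update rule, based only on the description in the abstract: the server memorizes each device's latest update and averages over all devices every round, so inactive devices neither stall the round nor bias the aggregate toward the active ones. All names and the toy objective below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim, rounds, lr = 10, 5, 50, 0.1
# Hypothetical non-i.i.d. setup: each device i has its own quadratic
# objective f_i(w) = ||w - t_i||^2 / 2 with a distinct local optimum t_i.
targets = rng.normal(size=(num_devices, dim))

def local_update(w, target):
    """Negative gradient of f_i at w (one local step direction)."""
    return -(w - target)

w = np.zeros(dim)
memory = np.zeros((num_devices, dim))  # memorized latest update per device

for t in range(rounds):
    # Arbitrary availability: each device is active with probability 0.5.
    active = rng.random(num_devices) < 0.5
    for i in np.flatnonzero(active):
        memory[i] = local_update(w, targets[i])  # refresh this device's slot
    # Aggregate memorized updates from ALL devices, active or not,
    # instead of waiting for stragglers or averaging only active ones.
    w = w + lr * memory.mean(axis=0)

# w should approach the minimizer of the average objective, i.e. mean(t_i).
print(np.linalg.norm(w - targets.mean(axis=0)))
```

Despite the staleness of the memorized updates from temporarily inactive devices, the iterate contracts toward the global optimum of the averaged objective; the paper's analysis quantifies how such staleness affects the convergence rate.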

Author Information

Xinran Gu (Tsinghua University)
Kaixuan Huang (Princeton University)
Jingzhao Zhang (MIT)
Longbo Huang (IIIS, Tsinghua University)