

Poster in Affinity Workshop: Black in AI Workshop

Practical Federated Learning: Empirical Evaluation of Federated Learning Techniques

Jonathan Mbuya · Shuochao Yao · Huzefa Rangwala


Abstract:

Federated Learning (FL) has recently emerged as a privacy-preserving method for training deep learning models in a decentralized fashion while keeping the training data on edge devices. In addition to preserving privacy, FL eliminates the need to send and store large amounts of data on a central server, often in the cloud. Researchers have proposed various FL algorithms, each addressing a particular weakness of earlier ones. Unfortunately, these algorithms are difficult to compare because each group (1) uses a different implementation, (2) tests in a different environment, and (3) reports different evaluation metrics. In this work, we implement two of the most popular FL algorithms, Federated Averaging (FedAvg) and Federated Proximal (FedProx), test them in the same environment, and apply the same evaluation metrics to an image classification task. We designed four experiments, each with a different percentage of stragglers, and trained ten image classification models per experiment. We observed that FedAvg achieves high accuracy while taking less time to train than FedProx in settings with few or no stragglers. However, with a high percentage of stragglers (up to 90%), our results show that FedProx outperforms FedAvg, achieving higher accuracy on average. We also noticed that FedAvg is highly unstable in environments with a high percentage of stragglers compared to FedProx. Lastly, we observed that FedProx is robust to both statistical and system heterogeneity, while FedAvg is less robust to system heterogeneity in environments with a high percentage of stragglers.
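
For readers less familiar with the two algorithms, the sketch below illustrates the difference the abstract relies on: FedAvg and FedProx share the same size-weighted server averaging step, but FedProx adds a proximal term (mu/2)·||w − w_global||² to each client's local objective, which limits how far heterogeneous or straggling clients drift from the global model. This is a minimal, hypothetical sketch, not the authors' implementation; the toy quadratic client losses and all names (fedavg_aggregate, local_update, mu) are illustrative assumptions.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Server step shared by FedAvg and FedProx: average client models,
    weighted by the number of local examples on each client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def local_update(w_global, grad_fn, lr=0.1, steps=5, mu=0.0):
    """One client's local training round.

    mu = 0.0 recovers plain FedAvg local SGD; mu > 0 adds FedProx's
    proximal term (mu/2) * ||w - w_global||^2, whose gradient
    mu * (w - w_global) pulls the local model back toward the global one.
    """
    w = w_global.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)  # local gradient + proximal gradient
        w -= lr * g
    return w

# Toy usage: quadratic local losses 0.5*||w - t||^2 with a different
# optimum t per client, mimicking statistical heterogeneity.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    targets = [rng.normal(size=3) for _ in range(4)]  # per-client optima
    sizes = [20, 10, 30, 40]                          # local dataset sizes
    w_global = np.zeros(3)
    for _ in range(10):                               # communication rounds
        updates = [local_update(w_global, lambda w, t=t: w - t, mu=0.1)
                   for t in targets]
        w_global = fedavg_aggregate(updates, sizes)
    print(w_global)
```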
