

Poster in Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership

Contribution Evaluation in Federated Learning: Examining Current Approaches

Jonathan Passerat-Palmbach · Vasilis Siomos


Abstract:

Federated Learning (FL) has seen explosive interest in settings where entities want to collaboratively train models while maintaining privacy and governance over their data. In FL, clients hold their own private, potentially heterogeneous data and compute resources, and jointly train a common model without raw data ever leaving their locale. Instead, the participants, whether end-users or institutions, contribute by sharing local model updates, which naturally differ in quality. Quantitatively evaluating the worth of these contributions is termed the Contribution Evaluation (CE) problem. We review current CE approaches, from the underlying mathematical framework to methods for efficiently computing a fair value for each client. Furthermore, we benchmark some of the most promising state-of-the-art approaches, along with a new one we introduce, on MNIST and CIFAR-10 to showcase their differences. While CE is only a small part of the overall FL system design, a fair and efficient CE method, embedded in an overall incentive mechanism for participants, is essential to the mainstream adoption of FL.
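
To make the CE problem concrete, the sketch below shows a Monte Carlo, Shapley-style valuation of clients in a single FL round. This is only an illustration of the general "fair value per client" idea mentioned in the abstract, not the paper's method: the function names (`shapley_values`, `utility`) and the toy utility are hypothetical, and the utility of a coalition is assumed to be the performance of a model aggregated from that coalition's updates.

```python
# Minimal Monte Carlo Shapley-style contribution evaluation sketch.
# Hypothetical names; `utility(S)` is assumed to score a model aggregated
# from the local updates of the client coalition S (e.g. validation accuracy).
import random
from typing import Callable, Dict, FrozenSet, List


def shapley_values(
    clients: List[str],
    utility: Callable[[FrozenSet[str]], float],
    num_permutations: int = 200,
    seed: int = 0,
) -> Dict[str, float]:
    """Estimate each client's Shapley value by averaging marginal gains
    over random client orderings."""
    rng = random.Random(seed)
    values = {c: 0.0 for c in clients}
    for _ in range(num_permutations):
        order = clients[:]
        rng.shuffle(order)
        coalition: FrozenSet[str] = frozenset()
        prev_utility = utility(coalition)
        for client in order:
            coalition = coalition | {client}
            new_utility = utility(coalition)
            values[client] += new_utility - prev_utility  # marginal gain
            prev_utility = new_utility
    return {c: v / num_permutations for c, v in values.items()}


if __name__ == "__main__":
    # Toy utility: clients "a" and "b" improve the model, "c" adds nothing.
    def toy_utility(coalition: FrozenSet[str]) -> float:
        weights = {"a": 0.5, "b": 0.3, "c": 0.0}
        return sum(weights[c] for c in coalition)

    print(shapley_values(["a", "b", "c"], toy_utility))
```

In practice, exact Shapley computation requires evaluating every coalition, which is exponential in the number of clients; the permutation sampling above is one common way the approaches surveyed trade accuracy for efficiency.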
