Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires access to the sensitive attribute values of all clients, while FL is designed precisely to protect privacy by never exposing the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values. Empirical evaluations on real-world datasets demonstrate the effectiveness of our solution for training fair and accurate ML models in federated cross-device setups with privacy guarantees for the users.
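To make the interplay between fairness and privacy concrete, the Python sketch below illustrates one way a group-fairness statistic (here the demographic-parity gap) could be estimated from distributed data without revealing any client's sensitive attribute: per-client counts are combined via additive secret sharing, a basic MPC building block, and Laplace noise is added to the released aggregates for DP. This is a minimal sketch under our own assumptions; the helper names (`share`, `reconstruct`, `laplace`), the three-server setup, and the choice of epsilon are illustrative, and it is not the protocol proposed in the paper.

```python
# Minimal illustration (not the paper's protocol): servers learn only a DP estimate
# of the demographic-parity gap, never any client's sensitive attribute or prediction.
import random

PRIME = 2**61 - 1  # modulus for additive secret sharing

def share(value, n_parties=3):
    """Split an integer into n_parties additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the shared value modulo PRIME."""
    return sum(shares) % PRIME

def laplace(scale):
    """Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

# Toy data: each client holds (sensitive attribute a, model prediction y_hat), both binary.
clients = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(1000)]

# Each client secret-shares four counters (positive predictions and totals per group)
# across the computing servers, so each server only ever sees random-looking shares.
n_servers = 3
server_sums = [[0, 0, 0, 0] for _ in range(n_servers)]
for a, y_hat in clients:
    counters = [y_hat * (a == 0), y_hat * (a == 1), int(a == 0), int(a == 1)]
    for i, c in enumerate(counters):
        for s, sh in enumerate(share(c, n_servers)):
            server_sums[s][i] = (server_sums[s][i] + sh) % PRIME

# Only the aggregate counts are reconstructed; per-client values stay hidden.
pos0, pos1, n0, n1 = (
    reconstruct([server_sums[s][i] for s in range(n_servers)]) for i in range(4)
)

# One client changes one group total and at most one positive count, so the L1
# sensitivity of the released count vector is 2; scale the Laplace noise accordingly.
# (In a real deployment the noise would be generated inside the MPC itself.)
epsilon = 1.0
noisy = [c + laplace(2.0 / epsilon) for c in (pos0, pos1, n0, n1)]
gap = abs(noisy[0] / max(noisy[2], 1.0) - noisy[1] / max(noisy[3], 1.0))
print(f"DP estimate of the demographic-parity gap: {gap:.3f}")
```

In a full pipeline, such securely aggregated, noised statistics would feed a fairness-aware training procedure; the sketch only shows the privacy mechanics around a single fairness metric.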
Author Information
Sikha Pentyala (University of Washington Tacoma)
Nicola Neophytou (Mila / Université de Montréal)
Anderson Nascimento (University of Washington Tacoma)
Martine De Cock (University of Washington Tacoma)
Golnoosh Farnadi (Mila)
More from the Same Authors
- 2021: Label Private Deep Learning Training based on Secure Multiparty Computation and Differential Privacy
  Sen Yuan · Milan Shen · Ilya Mironov · Anderson Nascimento
- 2022: Exposure Fairness in Music Recommendation
  Rebecca Salganik · Fernando Diaz · Golnoosh Farnadi
- 2022: Mitigating Online Grooming with Federated Learning
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022: Towards Private and Fair Federated Learning
  Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi
- 2022: Fair Targeted Immunization with Dynamic Influence Maximization
  Nicola Neophytou · Golnoosh Farnadi
- 2022: Early Detection of Sexual Predators with Federated Learning
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2023 Workshop: Algorithmic Fairness through the Lens of Time
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Jessica Schrouff
- 2022: Q & A
  Golnoosh Farnadi · Elliot Creager · Q.Vera Liao
- 2022: Tutorial part 1
  Golnoosh Farnadi
- 2022 Tutorial: Algorithmic fairness: at the intersections
  Golnoosh Farnadi · Q.Vera Liao · Elliot Creager
- 2022 Workshop: Algorithmic Fairness through the Lens of Causality and Privacy
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
- 2022: Secure Multiparty Computation for Synthetic Data Generation from Distributed Data
  Mayana Pereira · Sikha Pentyala · Martine De Cock · Anderson Nascimento · Rafael Timóteo de Sousa Júnior
- 2021 Workshop: Algorithmic Fairness through the lens of Causality and Robustness
  Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike
- 2020 Workshop: Algorithmic Fairness through the Lens of Causality and Interpretability
  Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz
- 2020 Poster: Counterexample-Guided Learning of Monotonic Neural Networks
  Aishwarya Sivaraman · Golnoosh Farnadi · Todd Millstein · Guy Van den Broeck
- 2019 Poster: Privacy-Preserving Classification of Personal Text Messages with Secure Multi-Party Computation
  Devin Reich · Ariel Todoki · Rafael Dowsley · Martine De Cock · Anderson Nascimento
- 2017: Poster Sessions
  Dennis Forster · David I Inouye · Shashank Srivastava · Martine De Cock · Srinagesh Sharma · Mateusz Kozinski · Petr Babkin · maxime he · Zhe Cui · Shivani Rao · Ramesh Raskar · Pradipto Das · Albert Zhao · Ravi Lanka