Existing bias mitigation algorithms for decision-making systems based on machine learning (ML) assume that the sensitive attributes of users are available to a central entity, which violates user privacy. Achieving fairness in Federated Learning (FL), which is designed to protect users' raw data, is challenging because bias mitigation algorithms inherently require access to sensitive attributes. We work toward resolving this tension between privacy and fairness by combining FL with Secure Multi-Party Computation and Differential Privacy. We propose methods to train group-fair models in cross-device FL under complete privacy guarantees, and we demonstrate the effectiveness of our solution in achieving group fairness on two real-world datasets.
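The abstract does not include an implementation, so the following is only a minimal sketch of the general recipe it describes: clients compute local updates and per-group statistics, only aggregates leave the devices, and Differential Privacy noise is added before the fairness gap is estimated. This is not the authors' protocol. The secure aggregation here is simulated by plain sums (a real deployment would compute them under MPC so no party sees per-client values), and `local_update`, `group_stats`, the Laplace calibration, and the demographic-parity statistic are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One step of logistic-regression SGD on a client's private data."""
    preds = 1.0 / (1.0 + np.exp(-(X @ weights)))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad, preds

def group_stats(preds, groups):
    """Per-group (positive predictions, group size), for a demographic-parity check."""
    stats = {}
    for g in np.unique(groups):
        mask = groups == g
        stats[g] = (float(np.sum(preds[mask] > 0.5)), float(np.sum(mask)))
    return stats

def dp_noise(scale):
    """Laplace noise for counts of sensitivity 1 (epsilon chosen for illustration only)."""
    return rng.laplace(0.0, scale)

# --- one simulated cross-device round --------------------------------------
d, n_clients, eps = 5, 10, 1.0
global_w = np.zeros(d)

client_updates, pos, tot = [], {0: 0.0, 1: 0.0}, {0: 0.0, 1: 0.0}
for _ in range(n_clients):
    X = rng.normal(size=(20, d))
    groups = rng.integers(0, 2, size=20)   # sensitive attribute: never leaves the device
    y = (X[:, 0] + 0.5 * groups + rng.normal(size=20) > 0).astype(float)
    w, preds = local_update(global_w.copy(), X, y)
    client_updates.append(w - global_w)
    for g, (p, t) in group_stats(preds, groups).items():
        pos[g] += p   # in the real protocol these sums are
        tot[g] += t   # accumulated under MPC, not in the clear

# stand-in for secure aggregation: only the sums are revealed, with DP noise
agg_update = np.mean(client_updates, axis=0)
noisy_rate = {g: (pos[g] + dp_noise(1.0 / eps)) / max(tot[g], 1.0) for g in pos}
gap = abs(noisy_rate[0] - noisy_rate[1])   # noisy demographic-parity gap

global_w += agg_update
print(f"DP estimate of demographic-parity gap: {gap:.3f}")
```

In the paper's setting the server could use such a privately computed gap to steer training toward a group-fair model; here the gap is merely printed, since the actual bias mitigation step is specific to the authors' method.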
Author Information
Sikha Pentyala (University of Washington Tacoma)
Nicola Neophytou (Mila / Université de Montréal)
Anderson Nascimento (University of Washington Tacoma)
Martine De Cock (University of Washington Tacoma)
Golnoosh Farnadi (Mila)
More from the Same Authors
- 2021 : Label Private Deep Learning Training based on Secure Multiparty Computation and Differential Privacy
  Sen Yuan · Milan Shen · Ilya Mironov · Anderson Nascimento
- 2022 : Exposure Fairness in Music Recommendation
  Rebecca Salganik · Fernando Diaz · Golnoosh Farnadi
- 2022 : Mitigating Online Grooming with Federated Learning
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022 : Fair Targeted Immunization with Dynamic Influence Maximization
  Nicola Neophytou · Golnoosh Farnadi
- 2022 : Early Detection of Sexual Predators with Federated Learning
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022 : Privacy-Preserving Group Fairness in Cross-Device Federated Learning
  Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi
- 2023 Workshop: Algorithmic Fairness through the Lens of Time
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Jessica Schrouff
- 2022 : Q & A
  Golnoosh Farnadi · Elliot Creager · Q. Vera Liao
- 2022 : Tutorial part 1
  Golnoosh Farnadi
- 2022 Tutorial: Algorithmic fairness: at the intersections
  Golnoosh Farnadi · Q. Vera Liao · Elliot Creager
- 2022 Workshop: Algorithmic Fairness through the Lens of Causality and Privacy
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
- 2022 : Secure Multiparty Computation for Synthetic Data Generation from Distributed Data
  Mayana Pereira · Sikha Pentyala · Martine De Cock · Anderson Nascimento · Rafael Timóteo de Sousa Júnior
- 2021 Workshop: Algorithmic Fairness through the Lens of Causality and Robustness
  Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike
- 2020 Workshop: Algorithmic Fairness through the Lens of Causality and Interpretability
  Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz
- 2020 Poster: Counterexample-Guided Learning of Monotonic Neural Networks
  Aishwarya Sivaraman · Golnoosh Farnadi · Todd Millstein · Guy Van den Broeck
- 2019 Poster: Privacy-Preserving Classification of Personal Text Messages with Secure Multi-Party Computation
  Devin Reich · Ariel Todoki · Rafael Dowsley · Martine De Cock · Anderson Nascimento
- 2017 : Poster Sessions
  Dennis Forster · David I Inouye · Shashank Srivastava · Martine De Cock · Srinagesh Sharma · Mateusz Kozinski · Petr Babkin · maxime he · Zhe Cui · Shivani Rao · Ramesh Raskar · Pradipto Das · Albert Zhao · Ravi Lanka