Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems. It allows one to measure and bound the risk associated with an individual's participation in a computation. However, it was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals. This paper builds on these observations and sheds light on the causes of the disparate impacts arising in the problem of differentially private empirical risk minimization. It focuses on the accuracy disparity arising among groups of individuals in two well-studied DP learning methods: output perturbation and differentially private stochastic gradient descent. The paper analyzes which data and model properties are responsible for the disproportionate impacts and why these aspects affect different groups disproportionately, and it proposes guidelines to mitigate these effects. The proposed approach is evaluated on several datasets and settings.
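For context, the following is a minimal sketch of one of the two mechanisms the paper studies: output perturbation, here instantiated with the Gaussian mechanism on top of scikit-learn's logistic regression. The sensitivity bound 2/(n·lam) assumes unit-norm feature vectors and follows the standard analysis for strongly convex regularized ERM; the function name and hyperparameters are illustrative, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def output_perturbation(X, y, lam=0.1, epsilon=1.0, delta=1e-5, seed=None):
    """(epsilon, delta)-DP logistic regression via output perturbation (sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Non-private ERM: (1/n) * sum(log-loss) + (lam/2) * ||theta||^2,
    # which maps to sklearn's parameterization with C = 1 / (n * lam).
    clf = LogisticRegression(C=1.0 / (n * lam), fit_intercept=False).fit(X, y)
    theta = clf.coef_.ravel()
    # L2 sensitivity of the minimizer: 2 / (n * lam) holds for 1-Lipschitz
    # losses over unit-norm examples (assumed here, per the classic analysis).
    sensitivity = 2.0 / (n * lam)
    # Gaussian-mechanism noise scale calibrated for (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return theta + rng.normal(0.0, sigma, size=theta.shape)

Shrinking epsilon inflates sigma, and the paper's analysis concerns precisely how this added noise degrades accuracy unevenly across groups of individuals.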
Author Information
Cuong Tran (Syracuse University)
My Dinh
Ferdinando Fioretto (Syracuse University)

I am an assistant professor of Computer Science at UVA. I lead the Responsible AI for Science and Engineering (RAISE) group, where we make advances in artificial intelligence with a focus on two key themes:
- AI for Science and Engineering: We develop the foundations to blend deep learning and constrained optimization for complex scientific and engineering problems.
- Trustworthy & Responsible AI: We analyze the equity of AI systems in support of decision-making and learning tasks, focusing especially on privacy and fairness.
More from the Same Authors
- 2023 Poster: Data Minimization at Inference Time »
  Cuong Tran · Ferdinando Fioretto
- 2023 Workshop: Algorithmic Fairness through the Lens of Time »
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Jessica Schrouff
- 2022 Spotlight: Pruning has a disparate impact on model accuracy »
  Cuong Tran · Ferdinando Fioretto · Jung-Eun Kim · Rakshit Naidu
- 2022: Panel »
  Ferdinando Fioretto · Amir-Hossein Karimi · Pratyusha Kalluri · Reza Shokri · Elizabeth Watkins · Su Lin Blodgett
- 2022 Workshop: Algorithmic Fairness through the Lens of Causality and Privacy »
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
- 2022 Poster: Pruning has a disparate impact on model accuracy »
  Cuong Tran · Ferdinando Fioretto · Jung-Eun Kim · Rakshit Naidu
- 2021 Poster: Learning Hard Optimization Problems: A Data Generation Perspective »
  James Kotary · Ferdinando Fioretto · Pascal Van Hentenryck