Poster
Differentially Private Empirical Risk Minimization under the Fairness Lens
Cuong Tran · My Dinh · Ferdinando Fioretto

Thu Dec 09 08:30 AM -- 10:00 AM (PST)

Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems. It makes it possible to measure and bound the risk associated with an individual's participation in a computation. However, it was recently observed that DP learning systems may exacerbate bias and unfairness across different groups of individuals. This paper builds on these important observations and sheds light on the causes of the disparate impacts arising in differentially private empirical risk minimization. It focuses on the accuracy disparity arising among groups of individuals in two well-studied DP learning methods: output perturbation and differentially private stochastic gradient descent (DP-SGD). The paper analyzes which data and model properties are responsible for the disproportionate impacts, why these aspects affect different groups disproportionately, and proposes guidelines to mitigate these effects. The proposed approach is evaluated on several datasets and settings.
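As a concrete reference point for one of the two mechanisms the abstract names, the sketch below shows the core of DP-SGD (per-example gradient clipping followed by calibrated Gaussian noise) for a logistic-regression loss. This is a minimal illustration in plain NumPy, assuming illustrative function names, hyperparameters, and toy data; it is not the paper's implementation or experimental setup.

```python
# Minimal DP-SGD sketch for logistic regression (illustrative only).
# Names, hyperparameters, and the toy data are assumptions for the demo,
# not the paper's actual configuration.
import numpy as np

def dp_sgd(X, y, epochs=10, lr=0.1, clip_norm=1.0,
           noise_multiplier=1.0, batch_size=32, rng=None):
    """Train logistic regression with per-example gradient clipping
    and Gaussian noise -- the two core steps of DP-SGD."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            clipped = []
            for i in batch:
                # Per-example gradient of the logistic loss: (p - y) * x.
                p = 1.0 / (1.0 + np.exp(-X[i] @ w))
                g = (p - y[i]) * X[i]
                # Clip each example's gradient to bound its sensitivity.
                norm = np.linalg.norm(g)
                clipped.append(g / max(1.0, norm / clip_norm))
            # Gaussian noise scaled to the clipping norm.
            noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
            w -= lr * (np.sum(clipped, axis=0) + noise) / len(batch)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X @ rng.normal(size=5) > 0).astype(float)
    w = dp_sgd(X, y)
    print(f"training accuracy: {np.mean(((X @ w) > 0) == y):.2f}")
```

The clipping norm bounds each individual's influence on an update, which is what lets the added noise translate into a formal privacy guarantee; it is also the step whose interaction with group-level gradient magnitudes drives the accuracy disparities the paper studies.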

Author Information

Cuong Tran (Syracuse University)
My Dinh
Ferdinando Fioretto (Syracuse University)

I am an assistant professor of Computer Science at UVA. I lead the Responsible AI for Science and Engineering (RAISE) group, where we advance artificial intelligence with a focus on two key themes:

- AI for Science and Engineering: We develop the foundations to blend deep learning and constrained optimization for complex scientific and engineering problems.
- Trustworthy & Responsible AI: We analyze the equity of AI systems in support of decision-making and learning tasks, focusing especially on privacy and fairness.
