
Stochastic Differentially Private and Fair Learning
Andrew Lowy · Devansh Gupta · Meisam Razaviyayn

Sat Dec 03 09:45 AM -- 09:55 AM (PST) @
Event URL: https://drive.google.com/file/d/128yRLZrUFt96GoLBYzK-n5yydhZiIPQd/view?usp=sharing

A major concern with the use of machine learning (ML) models for high-stakes decision-making (e.g., criminal sentencing or commercial lending) is that these models sometimes discriminate against certain demographic groups (e.g., by race, gender, or age). Fair learning algorithms have been developed to address this issue, but these algorithms can still leak sensitive information (e.g., race, gender, age). Differential privacy (DP) guarantees that sensitive data cannot be leaked. Existing algorithms for DP fair learning are impractical for training large-scale models since they either: a) require computations on the full data set in each iteration of training; or b) are not guaranteed to converge. In this paper, we provide the first efficient differentially private algorithm for fair learning that is guaranteed to converge, even when minibatches of data are used (i.e., stochastic optimization). Our framework is flexible enough to permit different fairness notions (e.g., demographic parity, equalized odds) and non-binary classification with multiple (non-binary) sensitive attributes. Along the way, we provide the first utility guarantee for a DP algorithm for solving nonconvex-strongly-concave min-max problems. Extensive numerical experiments show that our algorithm consistently offers significant performance gains vs. state-of-the-art DP fair baselines. Moreover, our algorithm is amenable to large-scale ML with non-binary targets and non-binary sensitive attributes.
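The abstract's key technical ingredient is stochastic optimization of a nonconvex-strongly-concave min-max problem under differential privacy. A minimal sketch of that idea (not the paper's actual algorithm) is noisy stochastic gradient descent-ascent: at each step, clip each gradient and add Gaussian noise (the Gaussian mechanism), then take a descent step for the min player and an ascent step for the max player. The toy objective below is chosen only for illustration: it is nonconvex in theta and strongly concave in w.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonconvex-strongly-concave objective (illustrative only):
#   f(theta, w) = sin(theta) + theta * w - 0.5 * w**2
# nonconvex in theta (the min player), strongly concave in w (the max player).
def grad_theta(theta, w):
    return np.cos(theta) + w

def grad_w(theta, w):
    return theta - w

def clip(g, c=1.0):
    # Clip the gradient to norm at most c (here scalar, so just magnitude).
    norm = abs(g)
    return g if norm <= c else g * (c / norm)

theta, w = 2.0, 0.0
lr, sigma, c = 0.05, 0.2, 1.0  # step size, DP noise scale, clipping threshold
for _ in range(2000):
    # Gaussian mechanism: clip, then add noise calibrated to the clip norm.
    gt = clip(grad_theta(theta, w), c) + sigma * rng.standard_normal()
    gw = clip(grad_w(theta, w), c) + sigma * rng.standard_normal()
    theta -= lr * gt  # descent on the min player
    w += lr * gw      # ascent on the max player
```

In a fair-learning instantiation, theta would be the model parameters and w a dual (or adversary) variable enforcing a fairness constraint such as demographic parity; the clipping and noise are what make each update differentially private, and the paper's contribution is showing such a stochastic scheme still provably converges.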

Author Information

Andrew Lowy (USC)
Devansh Gupta (Indraprastha Institute of Information Technology, Delhi)
Meisam Razaviyayn (University of Southern California)