Oral in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

Learning to Walk Impartially on the Pareto Frontier of Fairness, Privacy, and Utility

Mohammad Yaghini · Patty Liu · Franziska Boenisch · Nicolas Papernot


Abstract:

Deploying machine learning (ML) models often requires both fairness and privacy guarantees. Each of these objectives typically trades off against the accuracy of the model, which is the primary focus of most applications. As a result, utility is prioritized while privacy and fairness constraints are treated as mere hyperparameters. In this work, we argue that by prioritizing one objective over the others, we disregard more favorable solutions in which at least some objectives could have been improved without degrading any other. We adopt impartiality as a design principle: ML pipelines should not favor one objective over another. We theoretically show that a common ML pipeline design, which applies an unfairness mitigation step followed by private training, is not impartial. Then, starting from the two most common privacy frameworks for ML, we propose FairDP-SGD and FairPATE to train impartially specified private and fair models. Because impartially specified models recover the Pareto frontiers, i.e., the best achievable trade-offs between the different objectives, we show that they yield significantly better trade-offs than models optimized for one objective and hyperparameter-tuned for the others. Our approach thus allows us to mitigate tensions between objectives previously found to be incompatible.
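The abstract does not spell out the algorithms, but the kind of joint treatment it contrasts with the sequential "mitigate then privatize" pipeline can be illustrated with a minimal sketch: a DP-SGD-style update (per-example gradient clipping plus Gaussian noise) that folds a demographic-parity penalty into each per-example gradient. This is not the authors' FairDP-SGD; all names and constants (dp_sgd_step, per_example_grad, LAMBDA, etc.) are hypothetical, and the fairness gradient is an approximation that absorbs per-group normalization into the penalty weight.

```python
# Minimal sketch (not the paper's algorithm): DP-SGD-style logistic regression
# with a demographic-parity penalty applied inside each per-example gradient,
# so fairness and privacy are handled jointly rather than sequentially.
import numpy as np

rng = np.random.default_rng(0)
CLIP, SIGMA, LR, LAMBDA = 1.0, 1.0, 0.1, 0.5  # clip norm, noise mult., step size, fairness weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_grad(w, x, y, group, group_means):
    """Logistic-loss gradient plus an approximate demographic-parity penalty gradient."""
    p = sigmoid(x @ w)
    grad_loss = (p - y) * x
    # Push this group's mean score toward the other group's (sign of the gap).
    gap_sign = np.sign(group_means[group] - group_means[1 - group])
    grad_fair = LAMBDA * gap_sign * p * (1 - p) * x
    return grad_loss + grad_fair

def dp_sgd_step(w, X, y, groups):
    """One DP-SGD step: clip per-example gradients, then add Gaussian noise."""
    scores = sigmoid(X @ w)
    group_means = {g: scores[groups == g].mean() for g in (0, 1)}
    grads = np.stack([per_example_grad(w, X[i], y[i], groups[i], group_means)
                      for i in range(len(X))])
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads /= np.maximum(1.0, norms / CLIP)          # clip each gradient to norm CLIP
    noisy_sum = grads.sum(0) + rng.normal(0, SIGMA * CLIP, size=w.shape)
    return w - LR * noisy_sum / len(X)

# Toy data: two groups with a group-correlated feature.
X = rng.normal(size=(256, 4))
groups = rng.integers(0, 2, 256)
X[:, 0] += groups
y = (X[:, 1] > 0).astype(float)
w = np.zeros(4)
for _ in range(50):
    w = dp_sgd_step(w, X, y, groups)
```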
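Likewise, "recovering the Pareto frontier" can be made concrete with a small non-domination filter over candidate models scored on (error, privacy cost, fairness gap). The helper below is a generic sketch, not from the paper, assuming lower is better on every axis.

```python
# Minimal sketch (hypothetical helper): keep the Pareto-optimal candidates
# among (test error, privacy epsilon, fairness gap) tuples, lower = better.
def pareto_frontier(candidates):
    """Return candidates not dominated by any other candidate."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

models = [
    (0.10, 8.0, 0.05),   # (test error, privacy epsilon, demographic-parity gap)
    (0.12, 3.0, 0.04),
    (0.15, 3.0, 0.06),   # dominated by the second model
    (0.09, 8.0, 0.07),
]
print(pareto_frontier(models))  # -> [(0.1, 8.0, 0.05), (0.12, 3.0, 0.04), (0.09, 8.0, 0.07)]
```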
