
Fairness Certificates for Differentially Private Classification
Paul Mangold · Michaël Perrot · Marc Tommasi · Aurélien Bellet

In this work, we theoretically study the impact of differential privacy on fairness in binary classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on the probability that a decision function makes a negative prediction conditioned on an arbitrary event (such as membership in a sensitive group), which may be of independent interest. We use the aforementioned Lipschitz property to prove a high-probability bound showing that, given enough examples, the fairness level of private models is close to that of their non-private counterparts.
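The Lipschitz property can be illustrated numerically: if a group fairness measure is Lipschitz in the model parameters, then a small parameter perturbation (such as the noise added by a differentially private mechanism) changes the fairness level only slightly. The sketch below is purely illustrative, not the paper's construction: it uses synthetic data, a linear threshold classifier, the demographic parity gap as the fairness measure, and simple output perturbation with an assumed noise scale of 0.05.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, binary sensitive attribute s (group membership).
n, d = 10_000, 5
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)
X[:, 0] += 0.5 * s  # mild correlation between one feature and the group

def demographic_parity_gap(theta):
    """|P(h(x)=1 | s=1) - P(h(x)=1 | s=0)| for h(x) = 1{x . theta > 0}."""
    preds = (X @ theta > 0).astype(float)
    return abs(preds[s == 1].mean() - preds[s == 0].mean())

theta = rng.normal(size=d)  # stand-in for a trained non-private model

# Output perturbation: one simple way to make a model differentially
# private is to add calibrated Gaussian noise to its parameters.
# The scale 0.05 here is an arbitrary illustrative choice.
noise_scale = 0.05
theta_private = theta + noise_scale * rng.normal(size=d)

gap = demographic_parity_gap(theta)
gap_private = demographic_parity_gap(theta_private)
print(f"non-private fairness gap: {gap:.4f}")
print(f"private fairness gap:     {gap_private:.4f}")
print(f"difference:               {abs(gap - gap_private):.4f}")
```

With enough samples and a small noise scale, the two fairness gaps are close, which is the qualitative behavior the paper's high-probability bound formalizes.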

Author Information

Paul Mangold (Inria Lille)
Michaël Perrot (Inria)
Marc Tommasi (Inria)
Aurélien Bellet (Inria)
