

Contributed Talk
in
Workshop: Machine Learning with Guarantees

Hussein Mozannar, "Fair Learning with Private Data"

Abstract:

We study learning non-discriminatory predictors when the protected attributes are privatized or noisy. We observe that, in the population limit, non-discrimination against the noisy attributes is equivalent to non-discrimination against the original attributes, and we show this holds for a range of fairness criteria. We then characterize how much privacy increases the sample complexity of testing non-discrimination. Using this relationship, we propose how to carefully adapt existing non-discriminatory learners to work with privatized protected attributes. Care is crucial: naively applying these learners can create the illusion of non-discrimination while the resulting predictor remains highly discriminatory.
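The population-limit equivalence can be illustrated for demographic parity under a concrete privatization mechanism. The sketch below uses randomized response (a standard local-differential-privacy mechanism, chosen here for illustration; the abstract does not specify the mechanism), with hypothetical function and parameter names of our own: the noisy attribute Z equals the true attribute A with probability 1 - p and is flipped with probability p. Computing the positive-prediction rates conditioned on Z exactly shows that a predictor with zero demographic-parity gap against A also has zero gap against Z, while a discriminatory predictor keeps a nonzero (attenuated) gap:

```python
from fractions import Fraction as F

def rr_privatize_rates(a, p, q1, q0):
    """Population-level positive-prediction rates conditioned on the
    randomized-response attribute Z (Z = A w.p. 1 - p, flipped w.p. p,
    independently of the prediction given A).
    a = P(A=1), q1 = P(Yhat=1 | A=1), q0 = P(Yhat=1 | A=0)."""
    pz1 = a * (1 - p) + (1 - a) * p                          # P(Z=1)
    r1 = (a * (1 - p) * q1 + (1 - a) * p * q0) / pz1         # P(Yhat=1 | Z=1)
    r0 = (a * p * q1 + (1 - a) * (1 - p) * q0) / (1 - pz1)   # P(Yhat=1 | Z=0)
    return r1, r0

a, p = F(3, 10), F(1, 5)  # illustrative population and flip probability

# A predictor with equal rates across groups stays non-discriminatory
# against the noisy attribute: the gap w.r.t. Z is exactly zero.
r1, r0 = rr_privatize_rates(a, p, F(3, 5), F(3, 5))
print(r1 - r0)  # exact arithmetic via Fraction, so this is exactly 0

# A discriminatory predictor (true gap 3/10) still shows a nonzero,
# attenuated gap against the noisy attribute.
r1, r0 = rr_privatize_rates(a, p, F(4, 5), F(1, 2))
print(float(r1 - r0))
```

The attenuation in the second case is why care is needed: a gap estimated against privatized attributes understates the true gap, so a naive tester can mistake a highly discriminatory predictor for an approximately fair one unless the estimate is corrected for the noise level.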
