
Workshop: Algorithmic Fairness through the Lens of Time

On The Vulnerability of Fairness Constrained Learning to Malicious Noise

Avrim Blum · Princewill Okoroafor · Aadirupa Saha · Kevin Stangl


We consider the vulnerability of fairness-constrained learning to small amounts of malicious noise in the training data. [27] initiated the study of this question and presented negative results showing that there exist data distributions where, for several fairness constraints, any proper learner will exhibit high vulnerability when group sizes are imbalanced. Here, we present a more optimistic view, showing that if we allow randomized classifiers, then the landscape is much more nuanced. For example, for Demographic Parity we show we can incur only a Θ(α) loss in accuracy, where α is the malicious noise rate, matching the best possible even without fairness constraints. For Equal Opportunity, we show we can incur an O(√α) loss, and give a matching Ω(√α) lower bound. In contrast, [27] showed that for proper learners the loss in accuracy for both notions is Ω(1). The key technical novelty of our work is how randomization can bypass simple "tricks" an adversary can use to amplify its power. We also consider additional fairness notions including Equalized Odds and Calibration. For these fairness notions, the loss in accuracy clusters into three natural regimes: O(α), O(√α), and O(1). These results provide a more fine-grained view of the sensitivity of fairness-constrained learning to adversarial noise in training data.
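The role of randomization can be illustrated with a toy sketch (not from the paper itself): a perfectly accurate deterministic classifier may violate Demographic Parity, but randomly demoting a small fraction of positive predictions in the over-accepted group equalizes acceptance rates in expectation, at only a small cost in accuracy. All data and names below are hypothetical, chosen only to mirror the imbalanced-group setting the abstract describes.

```python
import random

random.seed(0)

# Hypothetical toy data: (group, true label) pairs. Group "A" is much
# larger than group "B", mirroring the imbalanced-group setting.
data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 5 + [("B", 0)] * 5

def positive_rate(preds, group):
    """Fraction of examples in `group` that receive prediction 1."""
    idx = [i for i, (g, _) in enumerate(data) if g == group]
    return sum(preds[i] for i in idx) / len(idx)

# A deterministic classifier that predicts the true label (100% accurate)
# but violates Demographic Parity: P(pred=1 | A) = 0.6 vs P(pred=1 | B) = 0.5.
det_preds = [y for (_, y) in data]
p_a = positive_rate(det_preds, "A")  # 0.6
p_b = positive_rate(det_preds, "B")  # 0.5

# Randomized post-processing: in the group with the higher positive rate,
# demote each positive prediction to 0 with just enough probability that
# the two groups' acceptance rates match in expectation.
flip_prob = (p_a - p_b) / p_a  # here 1/6

rand_preds = []
for (g, _), pred in zip(data, det_preds):
    if g == "A" and pred == 1 and random.random() < flip_prob:
        pred = 0
    rand_preds.append(pred)

# Expected accuracy loss: only the demoted positives in group A are now
# wrong, i.e. about 60 * (1/6) / 110 ≈ 9% of examples -- small, and
# vanishing as the original disparity shrinks.
expected_loss = (60 * flip_prob) / len(data)
```

A deterministic post-processing rule making the same correction would have to pick specific individuals to demote, which (as the abstract notes) is exactly the kind of rigidity an adversary injecting malicious training noise can exploit; per-example randomization removes that lever.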
