Poster
Inherent Tradeoffs in Learning Fair Representations
Han Zhao · Geoff Gordon
East Exhibition Hall B, C #111
Keywords: [ Fairness, Accountability, and Transparency ] [ Applications ] [ Algorithms -> Adversarial Learning; Algorithms -> Representation Learning; Deep Learning ] [ Adversarial Networks ]
Abstract:
With the prevalence of machine learning in high-stakes applications, especially those regulated by anti-discrimination laws or societal norms, it is crucial to ensure that predictive models do not propagate any existing bias or discrimination. Because deep neural nets can learn rich representations, recent advances in algorithmic fairness have focused on using adversarial techniques to learn fair representations that reduce bias in the data while simultaneously preserving utility. In this paper, through the lens of information theory, we provide the first result that quantitatively characterizes the tradeoff between demographic parity and the joint utility across different population groups. Specifically, when the base rates differ between groups, we show that any method aiming to learn fair representations admits an information-theoretic lower bound on the joint error across these groups. To complement our negative results, we also prove that if the optimal decision functions across different groups are close, then learning fair representations leads to an alternative notion of fairness, known as accuracy parity, which states that the error rates are close between groups. Finally, our theoretical findings are confirmed empirically on real-world datasets.
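To make the abstract's central claim concrete, the display below sketches the form such a lower bound typically takes; the notation (group distributions D_0 and D_1, group membership A, group errors Err, and the base-rate gap Delta_BR) is introduced here purely for illustration and is a hedged reading of the abstract, not a verbatim statement of the paper's theorem.

% Hedged sketch of the tradeoff described above (illustrative notation):
% if a predictor built on a fair representation satisfies demographic parity,
% its errors on the two groups cannot both be small once base rates differ.
\[
  \Delta_{\mathrm{BR}} \;=\; \bigl|\, \Pr_{\mathcal{D}_0}[Y = 1] \;-\; \Pr_{\mathcal{D}_1}[Y = 1] \,\bigr|
\]
\[
  \hat{Y} \perp A \ (\text{demographic parity})
  \;\Longrightarrow\;
  \mathrm{Err}_{\mathcal{D}_0}(\hat{Y}) \;+\; \mathrm{Err}_{\mathcal{D}_1}(\hat{Y}) \;\ge\; \Delta_{\mathrm{BR}},
\]
where \(\mathrm{Err}_{\mathcal{D}_a}(\hat{Y}) = \Pr_{\mathcal{D}_a}[\hat{Y} \neq Y]\). Read this way, the base-rate gap acts as an irreducible floor on the joint error of any demographic-parity-respecting predictor, which is the "inherent tradeoff" the title refers to.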