

Poster in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

Limitations of the “Four-Fifths Rule” and Statistical Parity Tests for Measuring Fairness

Manish Raghavan · Pauline Kim


Abstract:

Algorithmic tools in employment contexts are often evaluated via the "four-fifths rule," which measures disparities in selection rates between legally protected groups. While the "four-fifths rule" and related statistical parity tests have their origins in anti-discrimination law, they are flawed measures of discrimination. In this paper, we trace the origins of this class of tests through the law and computer science literatures and detail their limitations as applied to algorithmic employment tools, with a particular focus on the shift from retrospective auditing to prospective optimization. We then discuss the appropriate role for statistical parity tests in algorithmic governance, suggesting a combination of measures that may be more suitable for building and auditing models.
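
For reference, below is a minimal sketch of the statistical parity test the abstract refers to. Under the four-fifths rule (from the EEOC Uniform Guidelines), a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is taken as evidence of adverse impact. The group labels, counts, and the `passes_four_fifths` helper here are illustrative, not from the paper.

```python
def selection_rates(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
    """Per-group selection rate: number selected / number of applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}


def passes_four_fifths(selected: dict[str, int],
                       applicants: dict[str, int],
                       threshold: float = 0.8) -> bool:
    """Check the four-fifths rule: every group's selection rate must be
    at least `threshold` (80%) of the highest group's selection rate."""
    rates = selection_rates(selected, applicants)
    highest = max(rates.values())
    # Impact ratio: lowest group's rate relative to the highest group's rate.
    impact_ratio = min(rates.values()) / highest
    return impact_ratio >= threshold


# Illustrative example: group_B's rate (20/80 = 0.25) is only 50% of
# group_A's rate (50/100 = 0.50), so the tool fails the test.
applicants = {"group_A": 100, "group_B": 80}
selected = {"group_A": 50, "group_B": 20}
print(passes_four_fifths(selected, applicants))  # False
```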
