It's complicated. The relationship of algorithmic fairness and non-discrimination regulations for high-risk systems in the EU AI Act
Abstract
What constitutes a fair decision? This question is difficult to answer for humans and becomes even more challenging when Artificial Intelligence (AI) models are used. In light of problematic algorithmic outcomes, the EU has recently passed the AI Act, which mandates specific rules for high-risk systems, incorporating both traditional legal non-discrimination regulations and machine learning-based algorithmic fairness concepts. This paper aims to bridge these two concepts in the AI Act by providing: (1) a high-level introduction to both concepts aimed at computer science-oriented scholars, and (2) an analysis of the relationship between the AI Act's legal non-discrimination regulations and its algorithmic fairness provisions. Finally, we consider future steps in applying non-discrimination regulations and the AI Act. This paper serves as a foundation for future interdisciplinary collaboration between legal scholars and machine learning researchers with a computer science background who study discrimination in AI systems.