Oral in Workshop: Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations

Necessity of Processing Sensitive Data for Bias Detection and Monitoring: A Techno-Legal Exploration

Ioanna Papageorgiou · Carlos Mougan


Abstract:

This paper explores the intersection of the upcoming AI Regulation and fair ML research, specifically examining the legal principle of "necessity" in the context of processing sensitive personal data for bias detection and monitoring in AI systems. Drawing upon Article 10(5) of the AI Act, currently under negotiation, and the General Data Protection Regulation, we investigate the challenges posed by the nuanced concept of "necessity" in enabling AI providers to process sensitive personal data for bias detection and bias monitoring. The lack of guidance regarding this binding textual requirement creates significant legal uncertainty for all parties involved and risks an inconsistent legal application that undermines the provision's purpose. To address this issue from a techno-legal perspective, we delve into the core of the necessity principle and map it to current approaches in fair machine learning. Our objective is to bridge operational gaps between the forthcoming AI Act and the evolving field of fair ML, and to support an integrated treatment of non-discrimination and data protection desiderata in the conception of fair ML, thereby facilitating regulatory compliance.