

Spotlight in Workshop: Algorithmic Fairness through the Lens of Time

Information-Theoretic Bounds on The Removal of Attribute-Specific Bias From Neural Networks

Jiazhi Li · Mahyar Khayatkhoei · Jiageng Zhu · Hanchen Xie · Mohamed Hussein · Wael Abd-Almageed

Fri 15 Dec 11 a.m. PST — 11:03 a.m. PST
 
Presentation: Algorithmic Fairness through the Lens of Time
Fri 15 Dec 7 a.m. PST — 3:30 p.m. PST

Abstract:

Ensuring that a neural network does not rely on protected attributes (e.g., race, sex, age) for its predictions is crucial to advancing fair and trustworthy AI. While several promising methods for removing attribute bias in neural networks have been proposed, their limitations remain under-explored. In this work, we mathematically and empirically reveal an important limitation of attribute bias removal methods in the presence of strong bias. Specifically, we derive a general, non-vacuous information-theoretic upper bound on the performance of any attribute bias removal method in terms of the bias strength. We provide extensive experiments on synthetic, image, and census datasets to verify the theoretical bound and its consequences in practice. Our findings show that existing attribute bias removal methods are effective only when the inherent bias in the dataset is relatively weak. We therefore caution against using these methods on smaller datasets, where strong attribute bias can occur, and advocate the need for methods that can overcome this limitation.
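The abstract bounds removal performance in terms of "bias strength" but does not spell out the measure here. As an illustrative sketch only (not the paper's derivation), one natural way to quantify how strongly a protected attribute A is coupled to the target label Y in a dataset is the empirical mutual information I(Y; A): near zero for weak bias, large when A nearly determines Y. The datasets below are synthetic toy examples invented for this sketch.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(Y; A) in bits, estimated from a
    list of (y, a) samples of target label y and protected attribute a."""
    n = len(pairs)
    joint = Counter(pairs)                 # counts of (y, a) pairs
    py = Counter(y for y, _ in pairs)      # marginal counts of y
    pa = Counter(a for _, a in pairs)      # marginal counts of a
    mi = 0.0
    for (y, a), c in joint.items():
        p_ya = c / n
        # p(y,a) * log2( p(y,a) / (p(y) * p(a)) )
        mi += p_ya * math.log2(p_ya * n * n / (py[y] * pa[a]))
    return mi

# Strong bias: the attribute almost determines the label (95% aligned).
strong = [(0, 0)] * 95 + [(0, 1)] * 5 + [(1, 1)] * 95 + [(1, 0)] * 5
# Weak bias: the attribute is nearly independent of the label (55% aligned).
weak = [(0, 0)] * 55 + [(0, 1)] * 45 + [(1, 1)] * 55 + [(1, 0)] * 45

print(f"I(Y; A) strong bias: {mutual_information(strong):.3f} bits")
print(f"I(Y; A) weak bias:   {mutual_information(weak):.3f} bits")
```

Under the paper's thesis, a removal method applied to the `strong` dataset would face a much tighter information-theoretic ceiling on achievable fair performance than on the `weak` one, since far more of the label information is entangled with the attribute.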
