Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Networks
Ziliang Zong · Cody Blakeney · Gentry Atkinson · Nathaniel Huish · Vangelis Metsis
2021 Poster in Workshop: Algorithmic Fairness through the lens of Causality and Robustness
Abstract
AI systems raise serious concerns about bias and fairness. Algorithmic bias is more abstract and less intuitive than traditional forms of discrimination, and it can be more difficult to detect and mitigate. A clear gap exists in the current literature on evaluating the relative bias in the performance of multi-class classifiers. In this work, we propose two simple yet effective metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), to quantitatively evaluate the class-wise bias of two models in comparison to one another. We evaluate the performance of these new metrics by demonstrating practical use cases with pre-trained models and show that they can be used to measure fairness as well as bias.
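The abstract compares two models by how their per-class errors shift. Below is a minimal, hedged sketch of that general idea: it computes per-class false-positive and false-negative rate changes between a baseline model and a modified one, then summarizes them with a variance-style score and a symmetry-style score. These summaries are illustrative stand-ins only; the exact CEV and SDE definitions are given in the paper, and the function names here are hypothetical.

```python
# Illustrative sketch of class-wise bias comparison between two models.
# The spread/asymmetry summaries below are assumptions standing in for the
# paper's CEV and SDE metrics, which are defined in the full text.
import numpy as np
from sklearn.metrics import confusion_matrix


def per_class_fpr_fnr(y_true, y_pred, n_classes):
    """Per-class false-positive and false-negative rates (one-vs-rest)."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    fpr = np.zeros(n_classes)
    fnr = np.zeros(n_classes)
    for c in range(n_classes):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = cm.sum() - tp - fn - fp
        fpr[c] = fp / (fp + tn) if (fp + tn) else 0.0
        fnr[c] = fn / (fn + tp) if (fn + tp) else 0.0
    return fpr, fnr


def classwise_bias_summaries(y_true, pred_base, pred_new, n_classes):
    """Summarize how class-wise errors shift from a baseline model to a new one."""
    fpr_b, fnr_b = per_class_fpr_fnr(y_true, pred_base, n_classes)
    fpr_n, fnr_n = per_class_fpr_fnr(y_true, pred_new, n_classes)
    d_fpr, d_fnr = fpr_n - fpr_b, fnr_n - fnr_b      # per-class rate changes
    points = np.stack([d_fpr, d_fnr], axis=1)        # one 2-D point per class
    # Variance-style summary: how unevenly the error changes spread across classes.
    spread = np.mean(np.sum((points - points.mean(axis=0)) ** 2, axis=1))
    # Symmetry-style summary: how unbalanced FPR vs. FNR changes are per class.
    asymmetry = np.mean(np.abs(d_fpr - d_fnr))
    return spread, asymmetry
```

Intuitively, a lower spread means the second model's error changes are distributed more evenly across classes, while a lower asymmetry means no class trades false positives for false negatives disproportionately; the paper's actual metrics formalize this comparison.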