

Poster in Workshop: Algorithmic Fairness through the Lens of Causality and Privacy

Addressing observational biases in algorithmic fairness assessments

Chirag Nagpal · Olawale Salaudeen · Sanmi Koyejo · Stephen Pfohl


Abstract:

The objective of this early work is to characterize the implications of observational biases with subgroup-dependent structure (including selection bias, measurement error, label bias, or censoring that differs across subgroups) for model development and evaluation, extending work that aims to characterize and resolve conflicts among statistical fairness criteria in the absence of such biases. These biases pose challenges because naive approaches to model fitting produce statistically biased results, with potential fairness harms induced by systematic, differential misestimation across subgroups. Furthermore, it is challenging, and in some contexts impossible, to detect such biases without additional data or domain knowledge. As an example, we present an illustrative case study in a setting with differential censoring across subgroups.
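The following is a minimal illustrative sketch (not taken from the paper) of how censoring that differs across subgroups can induce differential misestimation. All group names, outcome rates, and censoring probabilities are hypothetical choices for illustration only: two subgroups have identical true outcome rates, but one subgroup's positive labels are censored (recorded as negative) more often, so naive estimation from the observed labels underestimates risk for that subgroup.

```python
# Hypothetical simulation of subgroup-dependent label censoring.
# Assumed setup: two groups with equal true outcome rates; positives in
# "group_b" are censored (appear negative) far more often than in "group_a".
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True outcome rates are identical across subgroups (assumed values).
true_rate = {"group_a": 0.30, "group_b": 0.30}

# Probability that a true positive is censored, i.e. recorded as negative.
censor_prob = {"group_a": 0.05, "group_b": 0.40}

for g in true_rate:
    y_true = rng.random(n) < true_rate[g]          # true outcomes
    censored = rng.random(n) < censor_prob[g]      # subgroup-dependent censoring
    y_observed = y_true & ~censored                # censored positives look negative
    print(
        f"{g}: true rate={y_true.mean():.3f}, "
        f"naive estimate from observed labels={y_observed.mean():.3f}"
    )

# group_a's naive estimate stays close to the truth, while group_b's is
# substantially lower, so a model fit to the observed labels would
# systematically underestimate risk for group_b despite equal true rates.
```

Because the observed data alone look internally consistent, detecting this differential underestimation would require additional data or domain knowledge about the censoring mechanism, as the abstract notes.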
