

Oral
in
Workshop: Algorithmic Fairness through the Lens of Causality and Privacy

Causal Discovery for Fairness

Ruta Binkyte-Sadauskiene · Karima Makhlouf · Carlos Pinzon · Sami Zhioua · Catuscia Palamidessi


Abstract:

Fairness guarantees that ML decisions do not result in discrimination against individuals or minorities. Reliably identifying and measuring fairness/discrimination is better achieved using causality, which considers the causal relation, beyond mere association, between the sensitive attribute (e.g., gender, race, religion) and the decision (e.g., job hiring, loan granting). The main impediment to using causality to address fairness, however, is the unavailability of the causal model (typically represented as a causal graph). Existing causal approaches to fairness in the literature do not address this problem and assume that the causal model is available. In this paper, we do not make such an assumption; instead, we review the major algorithms for discovering causal relations from observational data. In particular, we show how different causal discovery approaches may result in different causal models and, most importantly, how even slight differences between causal models can have a significant impact on fairness/discrimination conclusions.
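A core difficulty behind the abstract's claim is that observational data alone often cannot distinguish between causal models: for instance, the chain A → B → C and the fork A ← B → C imply exactly the same conditional independences, yet support different fairness conclusions if A is a sensitive attribute. The pure-Python sketch below (synthetic data; an illustrative example assumed here, not code from the paper) shows why a constraint-based discovery algorithm driven by independence tests sees the same evidence under both models.

```python
import random
import statistics


def corr(x, y):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * sx * sy)


def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z (linear-Gaussian CI test)."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / ((1 - rxz**2) ** 0.5 * (1 - ryz**2) ** 0.5)


random.seed(0)
n = 20000

# Model 1 (chain): A -> B -> C
a1 = [random.gauss(0, 1) for _ in range(n)]
b1 = [x + random.gauss(0, 1) for x in a1]
c1 = [x + random.gauss(0, 1) for x in b1]

# Model 2 (fork): A <- B -> C  -- a different causal story
b2 = [random.gauss(0, 1) for _ in range(n)]
a2 = [x + random.gauss(0, 1) for x in b2]
c2 = [x + random.gauss(0, 1) for x in b2]

# In BOTH models A and C are marginally correlated but
# (approximately) independent given B: the partial correlation is near 0.
# An independence-test-based discovery algorithm therefore cannot tell
# the two graphs apart from this data alone.
print("chain:", round(corr(a1, c1), 2), round(partial_corr(a1, c1, b1), 3))
print("fork: ", round(corr(a2, c2), 2), round(partial_corr(a2, c2, b2), 3))
```

The two graphs are Markov-equivalent, so a constraint-based algorithm such as PC can only recover the undirected skeleton A − B − C; which orientation an analyst assumes determines whether the effect of A on the decision looks direct or confounded, which is exactly the sensitivity the paper studies.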
