

Workshop

Algorithmic Fairness through the lens of Causality and Robustness

Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike

Trustworthy machine learning (ML) encompasses multiple fields of research, including (but not limited to) robustness, algorithmic fairness, interpretability and privacy. Recently, relationships between techniques and metrics used across different fields of trustworthy ML have emerged, leading to interesting work at the intersection of algorithmic fairness, robustness, and causality.

On the one hand, causality has been proposed as a powerful tool to address the limitations of initial statistical definitions of fairness. However, questions have emerged regarding the applicability of such approaches in practice and the suitability of a causal framing for studies of bias and discrimination. On the other hand, the robustness literature has produced promising approaches to improving fairness in ML models. For instance, parallels can be drawn between individual fairness and local robustness guarantees. In addition, the interactions between fairness and robustness can help us understand how fairness guarantees hold under distribution shift or adversarial/poisoning attacks.
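To make the fairness/robustness parallel concrete, one common formalization is sketched below (the notation is ours, introduced for illustration and not taken from the workshop materials): individual fairness asks a model f to treat individuals who are similar under a task-specific metric similarly, while local robustness asks that predictions be stable within a small neighborhood of an input.

\[
d_Y\big(f(x), f(x')\big) \le L \, d_X(x, x') \quad \text{for all } x, x' \qquad \text{(individual fairness)}
\]
\[
\lVert x - x' \rVert \le \epsilon \;\Rightarrow\; f(x) = f(x') \qquad \text{(local robustness at } x\text{)}
\]

Both conditions constrain how quickly f may change across nearby inputs; they differ mainly in how "nearby" is measured, via a fairness-motivated similarity metric d_X in the first case and a norm ball of radius epsilon in the second.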

After a first edition of this workshop focused on causality and interpretability, we now turn to the intersection of algorithmic fairness with recent techniques in causality and robustness. In this context, we will investigate how these topics relate, but also how they can augment each other to provide better or more suitable definitions and mitigation strategies for algorithmic fairness. We are particularly interested in addressing open questions in the field, such as:
- How can causally grounded fairness methods help develop more robust and fair algorithms in practice?
- What is an appropriate causal framing in studies of discrimination?
- How do adversarial/poisoning attacks target algorithmic fairness?
- How do fairness guarantees hold under distribution shift?
