
Algorithmic Fairness through the Lens of Causality and Privacy
Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff

Sat Dec 03 05:30 AM -- 02:55 PM (PST) @ Room 392
Event URL: https://www.afciworkshop.org/

As machine learning models permeate every aspect of decision-making systems in consequential areas such as healthcare and criminal justice, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, interpretability, accountability, privacy, and security. Initially studied in isolation, these fields of research have recently begun to intersect, raising interesting questions about how fairness can be achieved from a causal perspective and under privacy constraints.

Indeed, the field of causal fairness has expanded considerably in recent years, notably as a way to counteract the limitations of early statistical definitions of fairness. While a causal framing provides flexibility in modelling and mitigating sources of bias through a causal model, proposed approaches rely heavily on assumptions about the data generating process, e.g., the faithfulness and ignorability assumptions. This leads to open discussions on (1) how to fully characterize causal definitions of fairness, (2) how, if possible, to improve the applicability of such definitions, and (3) what constitutes a suitable causal framing of bias from a sociotechnical perspective.

Additionally, while most existing work on causal fairness assumes that sensitive attribute data is observed, such information is often unavailable due to, for example, data privacy laws or ethical considerations. This observation has motivated initial work on training and evaluating fair algorithms without access to sensitive information and on studying the compatibility and trade-offs between fairness and privacy. However, such work has been limited, for the most part, to statistical definitions of fairness, raising the question of whether these methods can be extended to causal definitions.

Given the interesting questions that emerge at the intersection of these fields, this workshop aims to investigate in depth not only how these topics relate, but also how they can augment each other to provide better or more suitable definitions and mitigation strategies for algorithmic fairness.

Author Information

Awa Dieng (Google Brain)

My research interests span machine learning, causal inference, fairness, and interpretability.

Miriam Rateike (Max Planck Institute for Intelligent Systems & Saarland University)
Golnoosh Farnadi (Mila)
Ferdinando Fioretto (Syracuse University)

I am an assistant professor of Computer Science at UVA. I lead the Responsible AI for Science and Engineering (RAISE) group, where we make advances in artificial intelligence with a focus on two key themes: (1) AI for Science and Engineering, developing the foundations to blend deep learning and constrained optimization for complex scientific and engineering problems; and (2) Trustworthy & Responsible AI, analyzing the equity of AI systems in support of decision-making and learning tasks, focusing especially on privacy and fairness.

Matt Kusner (University College London)
Jessica Schrouff (DeepMind)

I have been a Senior Research Scientist at DeepMind since 2022. I joined Alphabet in 2019 as part of Google Research, working on trustworthy machine learning for healthcare. Before that, I was a postdoctoral researcher at University College London and Stanford University, studying machine learning for neuroscience. My current interests lie at the intersection of trustworthy machine learning and causality.
