Workshop
Algorithmic Fairness through the Lens of Causality and Interpretability
Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz

Sat Dec 12 01:00 AM -- 12:10 PM (PST)
Event URL: https://www.afciworkshop.org

Black-box machine learning models are now widely deployed in decision-making settings across many parts of society, from criminal sentencing to medical diagnostics to loan approval. However, many of these models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these measures were an important first step toward addressing fairness concerns, they came with immediate challenges. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting from the language of structural causal models or potential outcomes, the idea is to frame, and then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, making it easier for them to align with our societal values on fairness? In this way, interpretability can sometimes be more actionable than causality work.
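One well-known instance of such a causal definition is counterfactual fairness (Kusner et al., 2017), which requires that a prediction for an individual would have been the same, in distribution, in a counterfactual world where that individual's protected attribute had been different. In the notation of structural causal models:

    P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a)
      = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)   for all y and a',

where \hat{Y} is the predictor, A the protected attribute, X the observed features, and U the background (exogenous) variables of the causal model. The intervention notation \hat{Y}_{A \leftarrow a'} denotes the prediction under the counterfactual in which A is set to a'.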

Given these initial successes, this workshop aims to investigate more deeply how open questions in algorithmic fairness can be addressed with causality and interpretability, for example: What improvements can causal definitions provide compared to existing statistical definitions of fairness? How can causally grounded methods help develop more robust fairness algorithms in practice? What interpretability tools are useful for detecting bias and building fair systems? What are good formalizations of interpretability when addressing fairness questions?
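For readers unfamiliar with the statistical definitions referenced above, the sketch below computes one of the simplest, the demographic parity difference of a binary classifier; the function name and example data are illustrative, not taken from the workshop materials.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: array of binary predictions (0/1).
    group:  array of binary membership (0/1) in a protected group.
    A value of 0 means the classifier satisfies demographic parity exactly.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # estimate of P(Y_hat = 1 | A = 0)
    rate_1 = y_pred[group == 1].mean()  # estimate of P(Y_hat = 1 | A = 1)
    return abs(rate_1 - rate_0)

# Example: a classifier that approves 75% of group 1 but only 25% of group 0.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(demographic_parity_difference(y_pred, group))  # 0.5

A purely statistical criterion like this one only constrains observed rates; it says nothing about the mechanism that produced the disparity, which is precisely the gap that causal definitions aim to close.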

Website: www.afciworkshop.org

Author Information

Awa Dieng (Google)

My research interests span machine learning, causal inference, fairness, and interpretability.

Jessica Schrouff (Google Research)

I have been a Senior Research Scientist at DeepMind since 2022. I joined Alphabet in 2019 as part of Google Research, working on trustworthy machine learning for healthcare. Before that, I was a postdoctoral researcher at University College London and Stanford University, studying machine learning for neuroscience. My current interests lie at the intersection of trustworthy machine learning and causality.

Matt Kusner (University College London)
Golnoosh Farnadi (Mila)
Fernando Diaz (Google)

Fernando Diaz is a research scientist at Google Brain Montréal. His research focuses on the design of information access systems, including search engines, music recommendation services, and crisis response platforms; he is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Previously, Fernando was the assistant managing director of Microsoft Research Montréal and a director of research at Spotify, where he helped establish its research organization on recommendation, search, and personalization. Fernando's work has received awards at SIGIR, WSDM, ISCRAM, and ECIR. He is the recipient of the 2017 British Computer Society Karen Spärck Jones Award. Fernando has co-organized workshops and tutorials at SIGIR, WSDM, and WWW. He has also co-organized several NIST TREC initiatives, as well as WSDM (2013), the Strategic Workshop on Information Retrieval (2018), FAT* (2019), SIGIR (2021), and the CIFAR Workshop on Artificial Intelligence and the Curation of Culture (2019).
