Workshop
Algorithmic Fairness through the Lens of Causality and Interpretability
Awa Dieng · Jessica Schrouff · Matt J Kusner · Golnoosh Farnadi · Fernando Diaz

Sat Dec 12 01:00 AM -- 12:10 PM (PST)
Event URL: https://www.afciworkshop.org

Black-box machine learning models have gained widespread deployment in decision-making settings across many parts of society, from sentencing decisions to medical diagnostics to loan lending. However, many of these models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these models were an important first step towards addressing fairness concerns, they faced immediate challenges. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting with the language of structural causal models or potential outcomes, the idea is to frame, and then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, making it easier to align them with our societal values on fairness? In this way, interpretability can sometimes be more actionable than causality-based work.

Given these initial successes, this workshop aims to investigate more deeply how open questions in algorithmic fairness can be addressed with causality and interpretability, including: What improvements can causal definitions provide compared to existing statistical definitions of fairness? How can causally grounded methods help develop more robust fairness algorithms in practice? What tools for interpretability are useful for detecting bias and building fair systems? What are good formalizations of interpretability when addressing fairness questions?

Website: www.afciworkshop.org

Sat 1:47 a.m. - 1:55 a.m.

Submit your questions on Rocket.Chat and the moderator will convey them to the speaker.

Sat 1:55 a.m. - 2:25 a.m.
Video

Ultimately, we want the world to be less unfair by changing it. Merely making fair passive predictions is not enough: our decisions will eventually have an effect on how a societal system works. We will discuss ways of modelling hypothetical interventions so that particular measures of counterfactual fairness are respected: that is, how do sensitive attributes interact with our actions to cause an unfair distribution of outcomes, and, that being the case, how do we mitigate such uneven impacts within the space of feasible actions? To make matters even harder, interference is likely: what happens to one individual may affect another. We will discuss how to express assumptions about, and consequences of, such causative factors for fair policy making, accepting that this is a daunting task but that we owe the public an explanation of our reasoning. Joint work with Matt Kusner, Chris Russell, and Joshua Loftus.

Ricardo Silva
Sat 2:25 a.m. - 2:30 a.m.

Submit your questions in Rocket.Chat and the moderator will convey them to the speaker.

To ask questions live, please join the Zoom call. We highly encourage you to use Rocket.Chat.

Please join the speaker during the breakout sessions for more discussions.

Sat 2:38 a.m. - 2:40 a.m.

Please join the authors on Gather.Town during the poster sessions for questions.

Feel free to submit your questions on Rocket.Chat and the moderator will convey them to the authors.

Sat 3:10 a.m. - 3:13 a.m.
Introduction to invited talk by Hoda Heidari
Sat 3:45 a.m. - 3:55 a.m.
Short break -- Join us on Gather.Town
Sat 3:55 a.m. - 4:55 a.m.

Please join the Zoom call for breakout discussions. If the Zoom call is full, you can join the breakouts through Gather.Town at the corresponding table.

Fairness in Health: 11:55 AM - 12:55 PM, onlinequestions event ID: 12122001, Zoom: https://ucl.zoom.us/j/98811169765?pwd=SStyWFNmdFlUQUFnekt4Q2FWSXhYQT09

Q&A with Ricardo Silva: 11:55 AM - 12:55 PM, onlinequestions event ID: 12122002, Zoom: https://ucl.zoom.us/j/91814715763?pwd=dmZkWkh6ZmN4bWN3WjY2L0dpakE2Zz09

Q&A with Hoda Heidari: 11:55 AM - 12:55 PM, onlinequestions event ID: 12122003, Zoom: https://us02web.zoom.us/j/89640547267?pwd=RzVZOW9ISmtaSmhLaE5BTFJnRFdtUT09

Sat 4:55 a.m. - 5:00 a.m.

Please join the Gather.Town for the poster session.

Sat 7:55 a.m. - 8:00 a.m.
Introduction to invited talk by Jon Kleinberg
Sat 8:32 a.m. - 8:40 a.m.

Submit your questions in Rocket.Chat and the moderator will convey them to the speaker.

To ask questions live, please join the Zoom call. We highly encourage you to use Rocket.Chat.

Please join the speaker during the breakout sessions for more discussions.

Sat 9:15 a.m. - 9:18 a.m.
Introduction to invited talk by Lily Hu

Author Information

Awa Dieng (Google)

My research interests span machine learning, causal inference, fairness, and interpretability.

Jessica Schrouff (Google Research)
Matt J Kusner (University College London)
Golnoosh Farnadi (Mila)
Fernando Diaz (Google)

Fernando Diaz is a research scientist at Google Brain Montréal. His research focuses on the design of information access systems, including search engines, music recommendation services, and crisis response platforms. He is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Previously, Fernando was the assistant managing director of Microsoft Research Montréal and a director of research at Spotify, where he helped establish its research organization on recommendation, search, and personalization. Fernando's work has received awards at SIGIR, WSDM, ISCRAM, and ECIR. He is the recipient of the 2017 British Computer Society Karen Spärck Jones Award. Fernando has co-organized workshops and tutorials at SIGIR, WSDM, and WWW. He has also co-organized several NIST TREC initiatives, WSDM (2013), the Strategic Workshop on Information Retrieval (2018), FAT* (2019), SIGIR (2021), and the CIFAR Workshop on Artificial Intelligence and the Curation of Culture (2019).
