Black-box machine learning models have been widely deployed in decision-making settings across many parts of society, from sentencing decisions to medical diagnostics to loan lending. However, many of these models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these measures were an important first step towards addressing fairness concerns, they came with immediate challenges. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting with the language of structural causal models or potential outcomes, the idea is to frame, and then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, making it easier for them to align with our societal values on fairness? In this way, interpretability can sometimes be more actionable than causality-based work.
Given these initial successes, this workshop aims to investigate more deeply how open questions in algorithmic fairness can be addressed with causality and interpretability. Questions such as: What improvements can causal definitions provide over existing statistical definitions of fairness? How can causally grounded methods help develop more robust fairness algorithms in practice? Which interpretability tools are useful for detecting bias and building fair systems? What are good formalizations of interpretability when addressing fairness questions?
Website: www.afciworkshop.org
Sat 1:47 a.m. - 1:55 a.m. | Tutorial: Questions (Live Q&A between the speaker and moderator)
Submit your questions on Rocket.chat and the moderator will convey them to the speaker.
Sat 1:55 a.m. - 2:25 a.m. | Invited Talk: On Prediction, Action and Interference (Invited talk with live questions)
Ultimately, we want the world to be less unfair by changing it. Making fair passive predictions is not enough; our decisions will eventually have an effect on how a societal system works. We will discuss ways of modelling hypothetical interventions so that particular measures of counterfactual fairness are respected: that is, how do sensitive attributes interact with our actions to cause an unfair distribution of outcomes, and, that being the case, how do we mitigate such uneven impacts within the space of feasible actions? To make matters even harder, interference is likely: what happens to one individual may affect another. We will discuss how to express assumptions about, and consequences of, such causative factors for fair policy making, accepting that this is a daunting task but that we owe the public an explanation of our reasoning. Joint work with Matt Kusner, Chris Russell and Joshua Loftus.
Ricardo Silva
Sat 2:25 a.m. - 2:30 a.m. | Questions: Invited talk, R. Silva (Live Q&A between the speaker and moderator)
Submit your questions in Rocket.chat and the moderator will convey them to the speaker. To ask questions live, please join the Zoom call; we highly encourage you to use Rocket.chat. Please join the speaker during the breakout sessions for further discussion.
Sat 2:38 a.m. - 2:40 a.m. | Introduction to contributed talks
Please join the authors on Gather.Town during the poster sessions for questions. Feel free to submit your questions on Rocket.chat and the moderator will convey them to the authors.
Sat 3:10 a.m. - 3:13 a.m. | Introduction to invited talk by Hoda Heidari (Introduction to speaker)
Sat 3:45 a.m. - 3:55 a.m. | Short break -- Join us on Gather.Town
Sat 3:55 a.m. - 4:55 a.m. | Virtual Breakout Session 1
Please join the Zoom call for breakout discussions. If the Zoom call is full, you can join the breakouts through Gather.Town at the corresponding table.
Fairness in Health: 11:55 AM - 12:55 PM, onlinequestions event ID 12122001, Zoom: https://ucl.zoom.us/j/98811169765?pwd=SStyWFNmdFlUQUFnekt4Q2FWSXhYQT09
Q&A with Ricardo Silva: 11:55 AM - 12:55 PM, onlinequestions event ID 12122002, Zoom: https://ucl.zoom.us/j/91814715763?pwd=dmZkWkh6ZmN4bWN3WjY2L0dpakE2Zz09
Q&A with Hoda Heidari: 11:55 AM - 12:55 PM, onlinequestions event ID 12122003, Zoom: https://us02web.zoom.us/j/89640547267?pwd=RzVZOW9ISmtaSmhLaE5BTFJnRFdtUT09
Sat 4:55 a.m. - 5:00 a.m. | Introduction to Poster session
Please join Gather.Town for the poster session.
Sat 7:55 a.m. - 8:00 a.m. | Introduction to invited talk by Jon Kleinberg (Introduction to speaker)
Sat 8:32 a.m. - 8:40 a.m. | Questions: Invited talk, J. Kleinberg (Live Q&A between the speaker and moderator)
Submit your questions in Rocket.chat and the moderator will convey them to the speaker. To ask questions live, please join the Zoom call; we highly encourage you to use Rocket.chat. Please join the speaker during the breakout sessions for further discussion.
Sat 9:15 a.m. - 9:18 a.m. | Introduction to invited talk by Lily Hu (Introduction to speaker)
Author Information
Awa Dieng (Google)
My research interests span machine learning, causal inference, fairness, and interpretability.
Jessica Schrouff (Google Research)

I have been a Senior Research Scientist at DeepMind since 2022. I joined Alphabet in 2019 as part of Google Research, working on trustworthy machine learning for healthcare. Before that, I was a postdoctoral researcher at University College London and Stanford University, studying machine learning for neuroscience. My current interests lie at the intersection of trustworthy machine learning and causality.
Matt Kusner (University College London)
Golnoosh Farnadi (Mila)
Fernando Diaz (Google)
Fernando Diaz is a research scientist at Google Brain Montréal. His research focuses on the design of information access systems, including search engines, music recommendation services, and crisis response platforms; he is particularly interested in understanding and addressing the societal implications of artificial intelligence more generally. Previously, Fernando was the assistant managing director of Microsoft Research Montréal and a director of research at Spotify, where he helped establish its research organization on recommendation, search, and personalization. Fernando's work has received awards at SIGIR, WSDM, ISCRAM, and ECIR. He is the recipient of the 2017 British Computer Society Karen Spärck Jones Award. Fernando has co-organized workshops and tutorials at SIGIR, WSDM, and WWW. He has also co-organized several NIST TREC initiatives, WSDM (2013), the Strategic Workshop on Information Retrieval (2018), FAT* (2019), SIGIR (2021), and the CIFAR Workshop on Artificial Intelligence and the Curation of Culture (2019).
More from the Same Authors
- 2021: Artsheets for Art Datasets
  Ramya Srinivasan · Emily Denton · Jordan Famularo · Negar Rostamzadeh · Fernando Diaz · Beth Coleman
- 2021: Certified Predictions using MPC-Friendly Publicly Verifiable Covertly Secure Commitments
  Nitin Agrawal · James Bell · Matt Kusner
- 2021: Maintaining fairness across distribution shifts: do we have viable solutions for real-world applications?
  Jessica Schrouff · Natalie Harris · Sanmi Koyejo · Ibrahim Alabdulmohsin · Eva Schnider · Diana Mincu · Christina Chen · Awa Dieng · Yuan Liu · Vivek Natarajan · Katherine Heller · Alexander D'Amour
- 2022: Exposure Fairness in Music Recommendation
  Rebecca Salganik · Fernando Diaz · Golnoosh Farnadi
- 2022: Mitigating Online Grooming with Federated Learning
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022: Towards Private and Fair Federated Learning
  Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi
- 2022: Fair Targeted Immunization with Dynamic Influence Maximization
  Nicola Neophytou · Golnoosh Farnadi
- 2022: Early Detection of Sexual Predators with Federated Learning
  Khaoula Chehbouni · Gilles Caporossi · Reihaneh Rabbany · Martine De Cock · Golnoosh Farnadi
- 2022: Privacy-Preserving Group Fairness in Cross-Device Federated Learning
  Sikha Pentyala · Nicola Neophytou · Anderson Nascimento · Martine De Cock · Golnoosh Farnadi
- 2022: Striving for data-model efficiency: Identifying data externalities on group performance
  Esther Rolf · Ben Packer · Alex Beutel · Fernando Diaz
- 2022: Partial identification without distributional assumptions
  Kirtan Padh · Jakob Zeitler · David Watson · Matt Kusner · Ricardo Silva · Niki Kilbertus
- 2022 Workshop: Cultures of AI and AI for Culture
  Alex Hanna · Rida Qadri · Fernando Diaz · Nick Seaver · Morgan Scheuerman
- 2022: Panel
  Hannah Korevaar · Manish Raghavan · Ashudeep Singh · Fernando Diaz · Chloé Bakalar · Alana Shine
- 2022: Q & A
  Golnoosh Farnadi · Elliot Creager · Q.Vera Liao
- 2022: Tutorial part 1
  Golnoosh Farnadi
- 2022 Tutorial: Algorithmic fairness: at the intersections
  Golnoosh Farnadi · Q.Vera Liao · Elliot Creager
- 2022: Opening remarks
  Awa Dieng
- 2022 Workshop: Algorithmic Fairness through the Lens of Causality and Privacy
  Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Matt Kusner · Jessica Schrouff
- 2022 Poster: Diagnosing failures of fairness transfer across distribution shift in real-world medical settings
  Jessica Schrouff · Natalie Harris · Sanmi Koyejo · Ibrahim Alabdulmohsin · Eva Schnider · Krista Opsahl-Ong · Alexander Brown · Subhrajit Roy · Diana Mincu · Christina Chen · Awa Dieng · Yuan Liu · Vivek Natarajan · Alan Karthikesalingam · Katherine Heller · Silvia Chiappa · Alexander D'Amour
- 2022 Poster: A Reduction to Binary Approach for Debiasing Multiclass Datasets
  Ibrahim Alabdulmohsin · Jessica Schrouff · Sanmi Koyejo
- 2022 Poster: Local Latent Space Bayesian Optimization over Structured Inputs
  Natalie Maus · Haydn Jones · Juston Moore · Matt Kusner · John Bradshaw · Jacob Gardner
- 2022 Poster: When Do Flat Minima Optimizers Work?
  Jean Kaddour · Linqing Liu · Ricardo Silva · Matt Kusner
- 2021 Workshop: Algorithmic Fairness through the lens of Causality and Robustness
  Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike
- 2021: Opening remarks
  Awa Dieng
- 2021 Poster: Causal Effect Inference for Structured Treatments
  Jean Kaddour · Yuchen Zhu · Qi Liu · Matt Kusner · Ricardo Silva
- 2020: AFCI2020: Closing remarks and Summary of Discussions
  Jessica Schrouff
- 2020 Workshop: Machine Learning for Molecules
  José Miguel Hernández-Lobato · Matt Kusner · Brooks Paige · Marwin Segler · Jennifer Wei
- 2020: AFCI2020: Opening remarks
  Awa Dieng
- 2020 Poster: A Class of Algorithms for General Instrumental Variable Models
  Niki Kilbertus · Matt Kusner · Ricardo Silva
- 2020 Poster: Barking up the right tree: an approach to search over molecule synthesis DAGs
  John Bradshaw · Brooks Paige · Matt Kusner · Marwin Segler · José Miguel Hernández-Lobato
- 2020 Spotlight: Barking up the right tree: an approach to search over molecule synthesis DAGs
  John Bradshaw · Brooks Paige · Matt Kusner · Marwin Segler · José Miguel Hernández-Lobato
- 2020: Responsible AI for healthcare at Google
  Jessica Schrouff
- 2020 Tutorial: (Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems Q&A
  Praveen Chandar · Fernando Diaz · Brian St. Thomas
- 2020 Poster: Counterexample-Guided Learning of Monotonic Neural Networks
  Aishwarya Sivaraman · Golnoosh Farnadi · Todd Millstein · Guy Van den Broeck
- 2020 Tutorial: (Track2) Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems
  Praveen Chandar · Fernando Diaz · Brian St. Thomas
- 2019 Poster: A Model to Search for Synthesizable Molecules
  John Bradshaw · Brooks Paige · Matt Kusner · Marwin Segler · José Miguel Hernández-Lobato
- 2016 Demonstration: Project Malmo - Minecraft for AI Research
  Katja Hofmann · Matthew A Johnson · Fernando Diaz · Alekh Agarwal · Tim Hutton · David Bignell · Evelyne Viegas