As machine learning models permeate decision-making systems in consequential areas such as healthcare, banking, hiring, and education, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, privacy, robustness, and interpretability. Although initially studied in isolation, these fields have recently converged, raising interesting questions about how fairness can be achieved under privacy, interpretability, and robustness constraints. This tutorial investigates how these topics relate and how they can augment one another to provide better, or better-suited, definitions and mitigation strategies for algorithmic fairness. We are particularly interested in addressing open questions in the field, such as: How is algorithmic fairness compatible with privacy constraints? What trade-offs arise when we consider algorithmic fairness together with robustness? Can we develop models that are both fair and explainable? We will also articulate some limitations of technical approaches to algorithmic fairness and discuss critiques coming from outside computer science.
Mon 11:00 a.m. - 11:05 a.m. | Welcome and introduction (tutorial part 0) | Elliot Creager
Mon 11:05 a.m. - 11:50 a.m. | Tutorial part 1: Introduction to fairness; Introduction to privacy; At the intersections: fairness and privacy | Golnoosh Farnadi
Mon 11:50 a.m. - 12:15 p.m. | Tutorial part 2: Introduction to robustness; At the intersections: fairness and robustness | Elliot Creager
Mon 12:15 p.m. - 12:45 p.m. | Tutorial part 3: Introduction to explainability; At the intersections: fairness and explainability | Q.Vera Liao
Mon 12:45 p.m. - 1:00 p.m. | Q & A | Golnoosh Farnadi · Elliot Creager · Q.Vera Liao
Mon 1:00 p.m. - 1:05 p.m. | Break to welcome panellists
Mon 1:05 p.m. - 1:30 p.m. | Panel | Ferdinando Fioretto · Amir-Hossein Karimi · Pratyusha Kalluri · Reza Shokri · Elizabeth Watkins · Su Lin Blodgett
Author Information
Golnoosh Farnadi (Mila)
Q.Vera Liao (Microsoft)
Elliot Creager (University of Toronto)