Fairness and robustness are often treated as orthogonal dimensions along which machine learning models are evaluated. Recent evidence, however, shows that fairness guarantees do not transfer across environments. In healthcare settings, this can mean, for example, that a model that performs fairly (according to a chosen metric) in hospital A exhibits unfairness when deployed in hospital B. Here we illustrate how fairness metrics can change under distribution shift using two real-world applications in Electronic Health Records (EHR) and dermatology. Through a causal analysis, we further show that clinically plausible shifts simultaneously affect multiple parts of the data-generating process. Such compound shifts violate the assumptions required by most current mitigation techniques, which typically target either covariate shift or label shift. Our work thus exposes a technical gap in a realistic problem setting and aims to elicit further research at the intersection of fairness and robustness in real-world applications.
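To make the central claim concrete, the minimal sketch below shows how a fairness metric measured in one environment can fail to transfer to another. It uses synthetic data rather than the paper's EHR or dermatology cohorts; the function names (dp_gap, simulate_environment), the decision rule, and the prevalence numbers are all illustrative assumptions, and the shift simulated is a simple group-conditional label shift rather than the compound shifts the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_gap(y_pred, group):
    """Demographic-parity gap: |P(yhat=1 | group=0) - P(yhat=1 | group=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def simulate_environment(n, prevalence_by_group):
    """Toy data generator; the tuple encodes a group-dependent label prevalence."""
    group = rng.integers(0, 2, size=n)
    p = np.where(group == 0, prevalence_by_group[0], prevalence_by_group[1])
    y = rng.binomial(1, p)
    # A *fixed* decision rule shared across environments: a noisy,
    # slightly group-dependent score of the label, thresholded at 0.5.
    score = 0.7 * y + 0.1 * group + rng.normal(0.0, 0.3, size=n)
    return (score > 0.5).astype(int), group

# Same model, two environments differing only in per-group label prevalence.
for name, prev in [("hospital A", (0.3, 0.3)), ("hospital B", (0.1, 0.5))]:
    y_pred, group = simulate_environment(100_000, prev)
    print(f"{name}: demographic parity gap = {dp_gap(y_pred, group):.3f}")
```

Under these assumptions the gap is small in hospital A (roughly 0.06) and large in hospital B (roughly 0.35), even though the model itself is unchanged: the fairness metric is a property of the model *and* the data distribution, not of the model alone.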
Author Information
Jessica Schrouff (Google Research)
Natalie Harris (Google)
Sanmi Koyejo (UIUC)
Ibrahim Alabdulmohsin (Google Research)
Eva Schnider (University of Basel)
Diana Mincu (Google)
Christina Chen (Google)
Awa Dieng (Google)
Yuan Liu (Google)
Vivek Natarajan (Google Brain)
Researcher working at the intersection of AI and healthcare at Google. Research interests include improving data efficiency, robustness, generalization, safety, fairness and privacy of AI systems.
Katherine Heller (Google)
Alexander D'Amour (Google Brain)
More from the Same Authors
- 2021: Disability prediction in multiple sclerosis using performance outcome measures and demographic data »
  Diana Mincu
- 2021 Workshop: Algorithmic Fairness through the Lens of Causality and Robustness »
  Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Mark Kwegyir-Aggrey · Miriam Rateike
- 2021 Poster: A Near-Optimal Algorithm for Debiasing Trained Machine Learning Models »
  Ibrahim Alabdulmohsin · Mario Lucic
- 2020: AFCI2020: Closing remarks and Summary of Discussions »
  Jessica Schrouff
- 2020 Workshop: Algorithmic Fairness through the Lens of Causality and Interpretability »
  Awa Dieng · Jessica Schrouff · Matt Kusner · Golnoosh Farnadi · Fernando Diaz
- 2020: Responsible AI for healthcare at Google »
  Jessica Schrouff
- 2020 Poster: What Do Neural Networks Learn When Trained With Random Labels? »
  Hartmut Maennel · Ibrahim Alabdulmohsin · Ilya Tolstikhin · Robert Baldock · Olivier Bousquet · Sylvain Gelly · Daniel Keysers
- 2020 Spotlight: What Do Neural Networks Learn When Trained With Random Labels? »
  Hartmut Maennel · Ibrahim Alabdulmohsin · Ilya Tolstikhin · Robert Baldock · Olivier Bousquet · Sylvain Gelly · Daniel Keysers
- 2019: Coffee break, posters, and 1-on-1 discussions »
  Julius von Kügelgen · David Rohde · Candice Schumann · Grace Charles · Victor Veitch · Vira Semenova · Mert Demirer · Vasilis Syrgkanis · Suraj Nair · Aahlad Puli · Masatoshi Uehara · Aditya Gopalan · Yi Ding · Ignavier Ng · Khashayar Khosravi · Eli Sherman · Shuxi Zeng · Aleksander Wieczorek · Hao Liu · Kyra Gan · Jason Hartford · Miruna Oprescu · Alexander D'Amour · Jörn Boehnke · Yuta Saito · Théophile Griveau-Billion · Chirag Modi · Shyngys Karimov · Jeroen Berrevoets · Logan Graham · Imke Mayer · Dhanya Sridhar · Issa Dahabreh · Alan Mishler · Duncan Wadsworth · Khizar Qureshi · Rahul Ladhania · Gota Morishita · Paul Welle
- 2019: Contributed talk 2 »
  Divyat Mahajan · Khashayar Khosravi · Alexander D'Amour
- 2019: Molecules and Genomes »
  David Haussler · Djork-Arné Clevert · Michael Keiser · Alan Aspuru-Guzik · David Duvenaud · David Jones · Jennifer Wei · Alexander D'Amour
- 2019 Poster: Reconciling meta-learning and continual learning with online mixtures of tasks »
  Ghassen Jerfel · Erin Grant · Tom Griffiths · Katherine Heller
- 2019 Spotlight: Reconciling meta-learning and continual learning with online mixtures of tasks »
  Ghassen Jerfel · Erin Grant · Tom Griffiths · Katherine Heller