We show that deep networks trained to satisfy demographic parity often do so through a form of race or gender awareness, and that the more we force a network to be fair, the more accurately we can recover race or gender from its internal state. Based on this observation, we investigate an alternative fairness approach: we add a second classification head to the network to explicitly predict the protected attribute (such as race or gender) alongside the original task. After training the two-headed network, we enforce demographic parity by merging the two heads, creating a network with the same architecture as the original. We establish a close relationship between existing approaches and ours by showing (1) that the decisions of a fair classifier are well-approximated by our approach, and (2) that an unfair and optimally accurate classifier can be recovered from a fair classifier together with our second head predicting the protected attribute. We use our explicit formulation to argue that existing fairness approaches, like ours, exhibit disparate treatment and are likely to be unlawful in a wide range of scenarios under US law.
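The merging step described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact construction: the weights, the merge coefficient `lam`, and the use of a single linear layer are illustrative assumptions. The point it shows is that combining the task head with the protected-attribute head yields a head of the same shape, so the merged model keeps the original architecture.

```python
# Hedged sketch: a linear "network" with two heads over a shared
# representation (identity here). Head 1 scores the task; head 2 predicts
# the protected attribute. "Merging" is shown as subtracting a scaled copy
# of the protected-attribute head from the task head, producing a single
# head of the same shape as the original task head.

def dot(w, x):
    """Inner product of a weight vector and an input."""
    return sum(wi * xi for wi, xi in zip(w, x))

w_task = [1.0, 0.5]       # illustrative task head
w_protected = [0.0, 1.0]  # illustrative protected-attribute head
lam = 0.5                 # hypothetical merge coefficient

# Merged head: elementwise combination, same dimensionality as w_task.
w_merged = [t - lam * p for t, p in zip(w_task, w_protected)]

x = [2.0, 1.0]            # an example input
score_task = dot(w_task, x)      # original task score
score_merged = dot(w_merged, x)  # score after merging the heads
```

Because `w_merged` has the same shape as `w_task`, the merged model can be deployed as a drop-in replacement for the original network, which is the architectural property the abstract emphasizes.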
Author Information
Michael Lohaus (University of Tübingen)
Matthäus Kleindessner (Amazon AWS)
Krishnaram Kenthapadi (Fiddler AI)
Francesco Locatello (Amazon)
Chris Russell (Amazon Web Services)
More from the Same Authors
- 2022 : Scalable Causal Discovery with Score Matching
  Francesco Montagna · Nicoletta Noceti · Lorenzo Rosasco · Kun Zhang · Francesco Locatello
- 2022 : A Human-Centric Take on Model Monitoring
  Murtuza Shergadwala · Himabindu Lakkaraju · Krishnaram Kenthapadi
- 2022 Spotlight: Lightning Talks 5B-3
  Yanze Wu · Jie Xiao · Nianzu Yang · Jieyi Bi · Jian Yao · Yiting Chen · Qizhou Wang · Yangru Huang · Yongqiang Chen · Peixi Peng · Yuxin Hong · Xintao Wang · Feng Liu · Yining Ma · Qibing Ren · Xueyang Fu · Yonggang Zhang · Kaipeng Zeng · Jiahai Wang · GEN LI · Yonggang Zhang · Qitian Wu · Yifan Zhao · Chiyu Wang · Junchi Yan · Feng Wu · Yatao Bian · Xiaosong Jia · Ying Shan · Zhiguang Cao · Zheng-Jun Zha · Guangyao Chen · Tianjun Xiao · Han Yang · Jing Zhang · Jinbiao Chen · MA Kaili · Yonghong Tian · Junchi Yan · Chen Gong · Tong He · Binghui Xie · Yuan Sun · Francesco Locatello · Tongliang Liu · Yeow Meng Chee · David P Wipf · Tongliang Liu · Bo Han · Bo Han · Yanwei Fu · James Cheng · Zheng Zhang
- 2022 Spotlight: Self-supervised Amodal Video Object Segmentation
  Jian Yao · Yuxin Hong · Chiyu Wang · Tianjun Xiao · Tong He · Francesco Locatello · David P Wipf · Yanwei Fu · Zheng Zhang
- 2022 Poster: Neural Attentive Circuits
  Martin Weiss · Nasim Rahaman · Francesco Locatello · Chris Pal · Yoshua Bengio · Bernhard Schölkopf · Erran Li Li · Nicolas Ballas
- 2022 Poster: Assaying Out-Of-Distribution Generalization in Transfer Learning
  Florian Wenzel · Andrea Dittadi · Peter Gehler · Carl-Johann Simon-Gabriel · Max Horn · Dominik Zietlow · David Kernert · Chris Russell · Thomas Brox · Bernt Schiele · Bernhard Schölkopf · Francesco Locatello
- 2022 Poster: Self-supervised Amodal Video Object Segmentation
  Jian Yao · Yuxin Hong · Chiyu Wang · Tianjun Xiao · Tong He · Francesco Locatello · David P Wipf · Yanwei Fu · Zheng Zhang
- 2021 Poster: Backward-Compatible Prediction Updates: A Probabilistic Approach
  Frederik Träuble · Julius von Kügelgen · Matthäus Kleindessner · Francesco Locatello · Bernhard Schölkopf · Peter Gehler
- 2020 Poster: What Did You Think Would Happen? Explaining Agent Behaviour through Intended Outcomes
  Herman Yau · Chris Russell · Simon Hadfield
- 2019 Poster: Fixing Implicit Derivatives: Trust-Region Based Learning of Continuous Energy Functions
  Chris Russell · Matteo Toso · Neill Campbell
- 2017 Poster: VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
  Akash Srivastava · Lazar Valkov · Chris Russell · Michael Gutmann · Charles Sutton
- 2017 Poster: Counterfactual Fairness
  Matt Kusner · Joshua Loftus · Chris Russell · Ricardo Silva
- 2017 Oral: Counterfactual Fairness
  Matt Kusner · Joshua Loftus · Chris Russell · Ricardo Silva
- 2017 Poster: When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
  Chris Russell · Matt Kusner · Joshua Loftus · Ricardo Silva