Poster ID: 7
Abstract: Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm also decentralizes the decision-making process and still achieves performance similar to that of the optimal algorithm, which requires centralization and non-recoverable distributions.
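The recoverability problem described in the abstract can be illustrated with a small simulation. The sketch below is not the paper's method; it assumes a hypothetical logistic data-generating process and a selection mechanism (variables A, X, Y, S and all parameters are illustrative) in which past records are observed only for approved applicants. It shows that a quantity many fairness criteria rely on, P(Y = 1 | A), is biased when estimated from the selected training data alone.

```python
# Minimal sketch (illustrative, not the paper's algorithm): outcome-dependent
# missingness biases a distribution used in fairness computations.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sensitive attribute A and a feature X that depends on A (assumed model).
A = rng.binomial(1, 0.5, size=n)
X = rng.normal(loc=0.5 * A, scale=1.0, size=n)

# True outcome Y (e.g., loan repayment) depends on X.
p_y = 1 / (1 + np.exp(-(X - 0.2)))
Y = rng.binomial(1, p_y)

# Selection S: historical decisions depended on X and (information about) Y,
# so the missingness is not at random.
p_s = 1 / (1 + np.exp(-(1.5 * Y + X - 1.0)))
S = rng.binomial(1, p_s)

def p_y1_given_a(y, a, mask=None):
    """Estimate P(Y=1 | A=a), optionally restricted to selected rows."""
    keep = (A == a) if mask is None else ((A == a) & mask)
    return y[keep].mean()

for a in (0, 1):
    pop = p_y1_given_a(Y, a)                 # population quantity
    obs = p_y1_given_a(Y, a, mask=(S == 1))  # what the training data shows
    print(f"A={a}: population P(Y=1|A)={pop:.3f}, observed-only={obs:.3f}")

# The gap between the two estimates illustrates why ignoring the selection
# mechanism can void fairness guarantees computed on training data alone.
```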
Author Information
Naman Goel (ETHZ)

Naman is a researcher at the University of Oxford (affiliated with the Department of Computer Science and the Human-Centered Computing group). He is involved in the Oxford Martin Programme on Ethical Web and Data Architectures. He earned his Ph.D. at the School of Computer and Communication Sciences, EPFL, and his undergraduate (and integrated master's) degree at the Indian Institute of Technology (IIT), Varanasi. His research interests include algorithmic fairness, responsible AI, game-theoretic incentive design, and privacy-preserving data architectures. He has also worked at ETH Zürich, Microsoft Research, Qatar Computing Research Institute, INRIA (France), Centro de Informática (Brazil), and IIT Kharagpur.
Amit Deshpande (Microsoft Research)
More from the Same Authors
- 2022: Generating Intuitive Fairness Specifications for Natural Language Processing
  Florian E. Dorner · Momchil Peychev · Nikola Konstantinov · Naman Goel · Elliott Ash
- 2023 Poster: Causal Effect Regularization: Automated Detection and Removal of Spurious Attributes
  Abhinav Kumar · Amit Deshpande · Amit Sharma
- 2023 Poster: WCLD: Curated Large Dataset of Criminal Cases from Wisconsin Circuit Courts
  Nianyun Li · Naman Goel · Peiyao Sun · Claudia Marangon · Elliott Ash
- 2021 Poster: Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks
  Sandesh Kamath · Amit Deshpande · Subrahmanyam Kambhampati Venkata · Vineeth N Balasubramanian
- 2016 Poster: Batched Gaussian Process Bandit Optimization via Determinantal Point Processes
  Tarun Kathuria · Amit Deshpande · Pushmeet Kohli