Workshop
Algorithmic Fairness through the lens of Causality and Robustness
Jessica Schrouff · Awa Dieng · Golnoosh Farnadi · Kweku Kwegyir-Aggrey · Miriam Rateike

Mon Dec 13 01:00 AM -- 12:30 PM (PST)
Event URL: https://www.afciworkshop.org/afcr2021

Trustworthy machine learning (ML) encompasses multiple fields of research, including (but not limited to) robustness, algorithmic fairness, interpretability and privacy. Recently, relationships between techniques and metrics used across different fields of trustworthy ML have emerged, leading to interesting work at the intersection of algorithmic fairness, robustness, and causality.

On one hand, causality has been proposed as a powerful tool to address the limitations of initial statistical definitions of fairness. However, questions have emerged regarding the applicability of such approaches in practice and the suitability of a causal framing for studies of bias and discrimination. On the other hand, the robustness literature has surfaced promising approaches to improve fairness in ML models. For instance, parallels can be drawn between individual fairness and local robustness guarantees. In addition, the interactions between fairness and robustness can help us understand how fairness guarantees hold under distribution shift or adversarial/poisoning attacks.

After a first edition of this workshop that focused on causality and interpretability, we now turn to the intersection of algorithmic fairness with recent techniques in causality and robustness. In this context, we will investigate how these different topics relate, and also how they can augment each other to provide better or more suitable definitions and mitigation strategies for algorithmic fairness. We are particularly interested in addressing open questions in the field, such as:
- How can causally grounded fairness methods help develop more robust and fair algorithms in practice?
- What is an appropriate causal framing in studies of discrimination?
- How can adversarial/poisoning attacks target algorithmic fairness?
- How do fairness guarantees hold under distribution shift?

Mon 3:20 a.m. - 3:30 a.m.
  

We survey the many roles that causal reasoning plays in reasoning about fairness in machine learning. While the existing scholarship on causal approaches to fairness in machine learning has focused on the degree to which features in a model might have been causally affected by (discrimination on the basis of) sensitive features, causal reasoning also plays an important, if more implicit, role in other ways of assessing the fairness of models. This paper therefore tries to distinguish and disentangle the many roles that causal reasoning plays in reasoning about fairness, with the additional goal of asking how causality is thought to help achieve these normative goals and to what extent this is possible or necessary.

Irene Y Chen · Hal Daumé III · Solon Barocas
Mon 3:30 a.m. - 3:40 a.m.

Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. Many approaches for training fair models from data have been developed, and an implicit assumption about such algorithms is that they are able to recover a fair model despite potential historical biases in the data. In this work we show a number of impossibility results that indicate that there is no learning algorithm that can recover a fair model when a proportion of the dataset is subject to arbitrary manipulations. Specifically, we prove that there are situations in which an adversary can force any learner to return a biased classifier, with or without degrading accuracy, and that the strength of this bias increases for learning problems with underrepresented protected groups in the data. Our results emphasize the importance of further studying data corruption models of varying strength and of establishing stricter data collection practices for fairness-aware learning.

Nikola Konstantinov · Christoph Lampert
Mon 6:06 a.m. - 6:09 a.m.

While conventional ranking systems focus solely on maximizing the utility of the ranked items to users, fairness-aware ranking systems additionally try to balance the exposure for different protected attributes such as gender or race. To achieve this type of group fairness for ranking, we derive a new ranking system based on the first principles of distributional robustness. We formulate a minimax game between a player choosing a distribution over rankings to maximize utility while satisfying fairness constraints against an adversary seeking to minimize utility while matching statistics of the training data. We show that our approach provides better utility for highly fair rankings than existing baseline methods.

Omid Memarrast · Ashkan Rezaei · Rizal Fathony · Brian Ziebart
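The exposure imbalance that fairness-aware ranking targets can be made concrete with a small sketch. This is illustrative only, not the authors' minimax method: it assumes the standard position-bias weight 1/log2(rank + 1) and two hypothetical rankings of four items from groups A and B.

```python
import numpy as np

def group_exposure(groups):
    """Average position-based exposure per group for one ranking.

    The weight 1/log2(rank + 1) is one standard position-bias model:
    items ranked higher receive more exposure."""
    groups = np.asarray(groups)
    exposure = 1.0 / np.log2(np.arange(2, len(groups) + 2))
    return {g: exposure[groups == g].mean() for g in np.unique(groups)}

# Two rankings of the same four items: groups clustered vs interleaved.
clustered = group_exposure(["A", "A", "B", "B"])
mixed = group_exposure(["A", "B", "A", "B"])

gap_clustered = abs(clustered["A"] - clustered["B"])
gap_mixed = abs(mixed["A"] - mixed["B"])
print(gap_clustered, gap_mixed)  # interleaving the groups narrows the gap
```

A fairness-aware ranker chooses among rankings like these to balance such group exposures while preserving utility.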
Mon 6:12 a.m. - 6:15 a.m.

We study fairness through the lens of cooperative multi-agent learning. Our work is motivated by empirical evidence that naive maximization of team reward yields unfair outcomes for individual team members. To address fairness in multi-agent contexts, we introduce team fairness, a group-based fairness measure for multi-agent learning. We then prove that it is possible to enforce team fairness during policy optimization by transforming the team's joint policy into an equivariant map. We refer to our multi-agent learning strategy as Fairness through Equivariance (Fair-E) and demonstrate its effectiveness empirically. We then introduce Fairness through Equivariance Regularization (Fair-ER) as a soft-constraint version of Fair-E and show that it reaches higher levels of utility than Fair-E and fairer outcomes than non-equivariant policies. Finally, we present novel findings regarding the fairness-utility trade-off in multi-agent settings; showing that the magnitude of the trade-off is dependent on agent skill level.

Niko Grupen · Bart Selman · Daniel Lee
Mon 6:15 a.m. - 6:18 a.m.

As the use of deep learning in high-impact domains becomes ubiquitous, it is increasingly important to assess the resilience of models. One such high-impact domain is face recognition, with real-world applications involving images affected by various degradations, such as motion blur or high exposure. Moreover, images captured across different attributes, such as gender and race, can also challenge the robustness of a face recognition algorithm. While summary statistics suggest that the aggregate performance of face recognition models has continued to improve, these metrics do not directly measure the robustness or fairness of the models. Visual Psychophysics Sensitivity Analysis (VPSA) [1] provides a way to pinpoint individual causes of failure by introducing incremental perturbations in the data. However, perturbations may affect subgroups differently. In this paper, we propose a new robustness-based fairness evaluation in the form of a generic framework that extends VPSA. With this framework, we can analyze the ability of a model to perform fairly for different subgroups of a population affected by perturbations, and pinpoint the exact failure modes for a subgroup by measuring targeted robustness. With the increasing focus on the fairness of face recognition algorithms, we use face recognition as an example application of our framework and propose to represent the fairness of a model via AUC matrices. We analyze the performance of common face recognition models and empirically show that certain subgroups may be at a disadvantage when images are perturbed.

Aparna Joshi · Xavier Suau Cuadros · Nivedha Sivakumar · Luca Zappella · Nicholas Apostoloff
Mon 6:21 a.m. - 6:24 a.m.

AI systems raise serious concerns about bias and fairness. Algorithmic bias is more abstract and unintuitive than traditional forms of discrimination, and can be more difficult to detect and mitigate. A clear gap exists in the current literature on evaluating the relative bias in the performance of multi-class classifiers. In this work, we propose two simple yet effective metrics, Combined Error Variance (CEV) and Symmetric Distance Error (SDE), to quantitatively evaluate the class-wise bias of two models in comparison to one another. We evaluate these new metrics by demonstrating practical use cases with pre-trained models and show that they can be used to measure fairness as well as bias.

Ziliang Zong · Cody Blakeney · Gentry Atkinson · Nathaniel Huish · Vangelis Metsis
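As a rough illustration of the quantity such metrics operate on (not the CEV/SDE definitions themselves, which are given in the paper), one can compare the per-class error profiles of two hypothetical models on the same labels:

```python
import numpy as np

def per_class_error_rates(y_true, y_pred):
    """Error rate of a classifier, computed separately for each class."""
    classes = np.unique(y_true)
    return np.array([np.mean(y_pred[y_true == c] != c) for c in classes])

# Hypothetical predictions from two models on the same 3-class labels.
y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
model_a = np.array([0, 0, 0, 1, 1, 1, 1, 0, 2, 2, 2, 2])
model_b = np.array([0, 0, 1, 1, 1, 1, 1, 1, 2, 0, 1, 2])

err_a = per_class_error_rates(y_true, model_a)
err_b = per_class_error_rates(y_true, model_b)

# How unevenly the change in error is distributed across classes is a
# crude proxy for the relative class-wise bias between the two models.
print(err_a, err_b, np.var(err_b - err_a))
```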
Mon 6:24 a.m. - 6:27 a.m.

We study the transferability of fair predictors (i.e., classifiers or regressors) assuming domain adaptation. Given a predictor that is “fair” on some source distribution (of features and labels), is it still fair on a realized distribution that differs? We first generalize common notions of static, statistical group-level fairness to a family of premetric functions that measure “induced disparity.” We quantify domain adaptation by bounding group-specific statistical divergences between the source and realized distributions. Next, we explore cases of simplifying assumptions for which bounds on domain adaptation imply bounds on changes to induced disparity. We provide worked examples for two commonly used fairness definitions (i.e., demographic parity and equalized odds) and models of domain adaptation (i.e., covariate shift and label shift) that prove to be special cases of our general method. Finally, we validate our theoretical results with synthetic data.

Reilly Raab · Yatong Chen · Yang Liu
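The question of whether a fairness guarantee measured on a source distribution survives a shift can be illustrated with a toy covariate-shift experiment. Everything below is an assumption for illustration (the distributions, the fixed sigmoid scorer, the shift magnitude), not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def dem_parity_gap(scores, group, threshold=0.5):
    """|P(decision = 1 | group 0) - P(decision = 1 | group 1)|."""
    d = scores >= threshold
    return abs(d[group == 0].mean() - d[group == 1].mean())

# Source distribution: one feature, mildly correlated with group membership.
n = 20000
group = rng.integers(0, 2, n)
x = rng.normal(loc=0.3 * group, scale=1.0, size=n)
scores = 1.0 / (1.0 + np.exp(-x))          # a fixed score-based predictor

gap_source = dem_parity_gap(scores, group)

# Covariate shift: group-1 inputs drift upward while the predictor is frozen.
scores_shift = 1.0 / (1.0 + np.exp(-(x + 0.8 * group)))
gap_shift = dem_parity_gap(scores_shift, group)

print(gap_source, gap_shift)  # the parity gap grows under the shift
```

The paper's contribution is to bound this kind of change in disparity in terms of the divergence between source and realized distributions.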
Mon 6:27 a.m. - 6:30 a.m.

Unfairness in mortgage lending has created generational inequality among racial and ethnic groups in the US. Many studies address this problem, but most existing work focuses on correlation-based techniques. In our work, we use the framework of counterfactual fairness to train fair machine learning models. We propose a new causal graph for the variables available in the Home Mortgage Disclosure Act (HMDA) data. We use a matching-based approach instead of the latent variable modeling approach, because the former approach does not rely on any modeling assumptions. Furthermore, matching provides us with counterfactual pairs in which the race variable is isolated. We first demonstrate the unfairness in mortgage approval and interest rates between African-American and non-Hispanic White sub-populations. Then, we show that having balanced data using matching does not guarantee perfect counterfactual fairness of the machine learning models.

Sama Ghoba · Nathan Colaner
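A minimal sketch of the matching idea, assuming toy synthetic covariates rather than actual HMDA fields: pair each record from one group with its nearest neighbour on covariates from the other group, so that the resulting pairs differ (approximately) only in the protected attribute.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for HMDA-style records: two covariates (think income and
# loan amount) and a binary protected attribute, all synthetic.
n = 200
race = rng.integers(0, 2, n)
covs = rng.normal(size=(n, 2))

def match_pairs(covs, race):
    """Pair each group-0 record with its nearest group-1 record on covariates.

    Matched pairs differ (approximately) only in the protected attribute,
    which is the sense in which matching isolates the race variable."""
    idx0 = np.flatnonzero(race == 0)
    idx1 = np.flatnonzero(race == 1)
    return [(i, idx1[np.argmin(np.linalg.norm(covs[idx1] - covs[i], axis=1))])
            for i in idx0]

pairs = match_pairs(covs, race)
dists = [float(np.linalg.norm(covs[i] - covs[j])) for i, j in pairs]
print(len(pairs), np.mean(dists))  # matched records are close in covariate space
```

Unlike latent-variable approaches to counterfactual fairness, this construction makes no modeling assumptions beyond the choice of distance.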
Mon 6:30 a.m. - 6:33 a.m.

To address discrimination and inequality in automated decision-making systems, it is standard practice to implement so-called “fairness” metrics during algorithm design. These measures, although useful to enforce and diagnose fairness at the decision stage, are not sufficient to capture forms of discrimination arising throughout, and from structural properties of, the system as a whole. To complement the standard approach, we propose a systemic analysis, aided by structural causal models, through which social interventions can be compared to algorithmic interventions. This framework allows us to identify bias outside the algorithmic stage and propose joint interventions on social dynamics and algorithm design. We show how, for a model of financial lending, structural interventions can drive the system towards equality even when algorithmic interventions are not able to do so. This means the responsibility of decision makers does not stop when local fairness metrics are satisfied; they must ensure a whole ecosystem that fosters equity for all.

Efren Cruz · Sarah Rajtmajer · Debashis Ghosh
Mon 6:33 a.m. - 6:36 a.m.

The Invariant Risk Minimization (IRM) framework aims to learn invariant features for out-of-distribution generalization under the assumption that the underlying causal mechanisms remain constant. In other words, environments should sufficiently “overlap” for finding meaningful invariant features. However, there are cases where the “overlap” assumption may not hold, and further, the assignment of the training samples to different environments is not known a priori. We believe that such cases arise naturally in networked settings and hierarchical data-generating models, wherein the IRM performance degrades. To mitigate this failure case, we argue for a partial invariance framework that minimizes risk fairly across environments. This introduces flexibility into the IRM framework by partitioning the environments based on hierarchical differences, while introducing invariance locally within the partitions. We motivate this framework in classification settings where distribution shifts vary across environments. Our results show the capability of partial invariant risk minimization to alleviate the trade-off between fairness and risk under different distribution shift settings.

Moulik Choraria · Ibtihal Ferwana · Ankur Mani · Lav Varshney
Mon 6:36 a.m. - 6:39 a.m.

Ethics and societal implications of automated decision making have become a major theme in machine learning research. Conclusions from theoretical studies in this area are often stated in general terms (such as affirmative action possibly hurting all groups, or fairness measures being incompatible with a decision maker being rational). Our work aims to highlight the degree to which such conclusions in fact rely on modeled beliefs as well as on the technicalities of a chosen framework of analysis (e.g., statistical learning theory, game theory, dynamics, etc.). We carefully discuss prior work through this lens and then highlight the effect of modeled beliefs by means of a simple statistical model where an observed score X is the result of two unobserved hidden variables (“talent” T and “environment” E). We assume that variable T is identically distributed for two subgroups of a population, while E models the disparities between an advantaged and a disadvantaged group. We analyze (Bayes-)optimal decision making under a variety of distributional assumptions and show that even this simple model exhibits some counterintuitive effects.

Ruth Urner · Jeff Edmonds · Karan Singh
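The flavor of the talent/environment model is easy to reproduce in simulation. The sketch below is an assumption-laden illustration, not the paper's analysis: talent is identically distributed in both groups, the environment term shifts one group's observed scores down by a fixed amount, and a group-blind threshold rule is applied to the observed score.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Talent T is identically distributed in both groups; the environment term
# E shifts observed scores of the disadvantaged group (group 1) downward.
talent = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)
env = np.where(group == 1, -0.5, 0.0) + rng.normal(0.0, 0.5, n)
score = talent + env                     # observed score X = T + E

blind = score >= 1.0                     # group-blind selection on X

rate_adv = blind[group == 0].mean()
rate_dis = blind[group == 1].mean()
talent_adv = talent[blind & (group == 0)].mean()
talent_dis = talent[blind & (group == 1)].mean()

# The blind rule selects the disadvantaged group at a lower rate, and the
# disadvantaged individuals it does select are more talented on average.
print(rate_adv, rate_dis, talent_adv, talent_dis)
```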
Mon 6:39 a.m. - 6:42 a.m.

Clustering algorithms are ubiquitous in modern data science pipelines and are utilized in numerous fields ranging from biology to facility location. Due to their widespread use, especially in societal resource allocation problems, recent research has aimed at making clustering algorithms fair, with great success. Furthermore, it has also been shown that clustering algorithms, much like other machine learning algorithms, are susceptible to adversarial attacks where a malicious entity seeks to subvert the performance of the learning algorithm. However, despite these known vulnerabilities, there has been no research investigating fairness-degrading adversarial attacks for clustering. We seek to bridge this gap by formulating a generalized attack optimization problem aimed at worsening the group-level fairness of centroid-based clustering algorithms. As a first step, we propose a fairness-degrading attack algorithm for k-median clustering that operates under a white-box threat model, where the clustering algorithm, fairness notion, and input dataset are known to the adversary. We provide empirical results as well as theoretical analysis for our simple attack algorithm, and find that the addition of the generated adversarial samples can lead to significantly lower fairness values. In this manner, we aim to motivate fairness-degrading adversarial attacks as a direction for future research in fair clustering.

Anshuman Chhabra · Adish Singla · Prasant Mohapatra
Mon 9:30 a.m. - 9:40 a.m.
  

Machine learning systems based on minimizing average error have been shown to perform inconsistently across notable subsets of the data, a problem that is not exposed by a low average error for the entire dataset. In consequential social and economic applications, where data represent people, this can lead to discrimination against underrepresented gender and ethnic groups. Distributionally Robust Optimization (DRO) seemingly addresses this problem by minimizing the worst expected risk across subpopulations. We establish theoretical results that clarify the relation between DRO and the optimization of the same loss averaged on an adequately weighted training dataset. A practical implication of our results is that neither DRO nor curating the training set should be construed as a complete solution for bias mitigation.

Agnieszka Słowik · Leon Bottou
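The relation between group DRO and reweighted averaging can be seen in a minimal numeric sketch. This is illustrative only, not the paper's theorem: here the "adequate" weights simply concentrate, uniformly, on the worst-off group.

```python
import numpy as np

# Per-example losses of one fixed model on two subpopulations.
losses = np.array([0.2, 0.3, 0.25, 0.9, 1.1, 1.0])
group  = np.array([0,   0,   0,    1,   1,   1  ])

group_risk = np.array([losses[group == j].mean() for j in (0, 1)])

# Group-DRO objective: the worst expected risk across subpopulations.
dro_risk = group_risk.max()

# The same value expressed as an ordinary average over a reweighted
# training set: put all the weight on the worst-off group.
worst = group_risk.argmax()
w = np.where(group == worst, 1.0 / np.sum(group == worst), 0.0)
weighted_risk = float(np.sum(w * losses))

print(dro_risk, weighted_risk)  # identical up to rounding
```

The abstract's caution applies here: picking weights (or the worst group) still leaves open whether either objective removes the underlying bias.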
Mon 9:40 a.m. - 9:50 a.m.

In spite of considerable practical importance, the current algorithmic fairness literature lacks technical methods to account for underlying geographic dependency while evaluating or mitigating bias issues for spatial data. In this paper we initiate the study of bias in spatial applications, taking the first step towards formalizing this line of quantitative methods. Bias in spatial data applications is often confounded by underlying spatial autocorrelation. We propose a hypothesis-testing methodology to detect the presence and strength of this effect, then account for it using a spatial filtering-based approach, in order to enable application of existing bias detection metrics. We evaluate our proposed methodology through numerical experiments on real and synthetic datasets, demonstrating that in the presence of several types of confounding effects due to the underlying spatial structure, our testing methods perform well in maintaining low type-II errors and nominal type-I errors.

Subhabrata Majumdar · Cheryl Flynn · Ritwik Mitra
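One standard statistic for detecting the spatial autocorrelation the abstract refers to is Moran's I; the abstract does not say which test the paper builds on, so treat the sketch below as generic background rather than the paper's method.

```python
import numpy as np

def morans_i(values, W):
    """Moran's I, a classical statistic for spatial autocorrelation.

    values : 1-D array of observations at n spatial units
    W      : n x n spatial weights matrix (W[i, j] > 0 iff i, j are neighbours)
    """
    z = values - values.mean()
    n, s0 = len(values), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# A chain of 5 locations where each unit neighbours the next one.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0

smooth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # spatially correlated trend
i_smooth = morans_i(smooth, W)

alternating = np.array([1.0, 5.0, 1.0, 5.0, 1.0])  # spatially anti-correlated
i_alt = morans_i(alternating, W)

print(i_smooth, i_alt)  # positive for the smooth trend, negative otherwise
```

When a bias signal and group membership are both spatially autocorrelated in this sense, naive bias metrics can be confounded, which is what the paper's filtering step is designed to remove.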
Mon 9:50 a.m. - 10:00 a.m.

Clustering algorithms are widely utilized for many modern data science applications. This motivates the need to make outputs of clustering algorithms fair. Traditionally, new fair algorithmic variants to clustering algorithms are developed for specific notions of fairness. However, depending on the application context, different definitions of fairness might need to be employed. As a result, new algorithms and analyses need to be proposed for each combination of clustering algorithm and fairness definition. Additionally, each new algorithm would need to be reimplemented for deployment in a real-world system. Hence, we propose an alternate approach to group-level fairness in center-based clustering inspired by research on data poisoning attacks. We seek to augment the original dataset with a small number of data points, called antidote data. When clustering is undertaken on this new dataset, the output is fair, for the chosen clustering algorithm and fairness definition. We formulate this as a general bi-level optimization problem which can accommodate any center-based clustering algorithms and fairness notions. We then categorize approaches for solving this bi-level optimization for two different problem settings. Extensive experiments on different clustering algorithms and fairness notions show that our algorithms can achieve desired levels of fairness on many real-world datasets with a very small percentage of antidote data added. We also find that our algorithms achieve lower fairness costs and competitive clustering performance compared to other state-of-the-art fair clustering algorithms.

Anshuman Chhabra · Adish Singla · Prasant Mohapatra
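A brute-force caricature of the antidote idea, not the paper's bi-level algorithm: try a few candidate points, rerun a tiny k-means for each, and keep the candidate that most improves a "balance" group-fairness score. The dataset, candidates, and balance definition below are all assumptions for illustration.

```python
import numpy as np

def kmeans_1d(x, init):
    """Plain Lloyd's algorithm on 1-D data with fixed initial centroids."""
    c = np.array(init, dtype=float)
    for _ in range(50):
        assign = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(len(c)):
            if np.any(assign == j):
                c[j] = x[assign == j].mean()
    return assign

def balance(assign, g, k=2):
    """Min over clusters of min/max group counts: one common group-fairness
    notion for clustering (0 if some cluster contains only one group)."""
    vals = []
    for j in range(k):
        n0 = int(np.sum((assign == j) & (g == 0)))
        n1 = int(np.sum((assign == j) & (g == 1)))
        vals.append(min(n0, n1) / max(n0, n1) if min(n0, n1) > 0 else 0.0)
    return min(vals)

# Toy dataset: group 0 dominates the left cluster, group 1 the right one.
x = np.array([0.0, 0.2, 0.4, 3.0, 2.6, 2.8, 3.2, 3.4])
g = np.array([0,   0,   0,   0,   1,   1,   1,   1  ])

base = balance(kmeans_1d(x, [0.0, 3.0]), g)   # left cluster is all group 0

# Candidate antidote points (all labelled group 1); keep whichever single
# addition yields the fairest clustering.
best = base
for cand in [0.1, 0.3, 1.5, 3.1]:
    x2, g2 = np.append(x, cand), np.append(g, 1)
    best = max(best, balance(kmeans_1d(x2, [0.0, 3.0]), g2))

print(base, best)  # fairness improves after adding one antidote point
```

The paper's bi-level formulation replaces this enumeration with an optimization over antidote points that accommodates general center-based algorithms and fairness notions.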

Author Information

Jessica Schrouff (Google Research)
Awa Dieng (Google)
Golnoosh Farnadi (Mila)
Kweku Kwegyir-Aggrey (Brown)
Miriam Rateike (Max Planck Institute for Intelligent Systems, Tübingen, Germany)
