
Workshop: Algorithmic Fairness through the Lens of Time

Are computational interventions to advance fair lending robust to different modeling choices about the nature of lending?

Benjamin Laufer · Manish Raghavan · Solon Barocas


To what degree are common interventions to improve the fairness of machine-learning-based lending decisions robust to modeling choices about the nature of lending? In this paper, we focus on the following modeling choices: 1) whether consumer and lender welfare is naturally aligned, 2) whether consumer interests are uniform, 3) whether loan decisions are binary (lend/don't lend) or continuous (varied loan terms), and 4) whether the costs of interventions are shouldered by lenders or passed along to consumers. For a variety of common interventions, we find that varying these modeling choices can lead to very different conclusions about how interventions impact consumer welfare and whether interventions actually help the consumers they intend to help. We discuss three such interventions: the use of alternative data, quantitative fairness constraints, and counterfactual explanations. We show that interventions that would seem likely to advance consumer welfare under certain modeling choices could end up undermining consumer welfare under reasonable alternative choices.
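The flavor of the abstract's claim can be illustrated with a toy simulation. The model below is not the paper's: every parameter (the repayment-probability distribution, the loan benefit, interest rates, and the default penalty) is a hypothetical choice made for illustration. It sketches how a single intervention, relaxing an approval threshold, can raise aggregate consumer welfare when lenders absorb the cost of extra defaults, yet lower it when that cost is passed through as higher interest rates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical population of applicants with repayment probabilities in [0.5, 1].
p = rng.uniform(0.5, 1.0, 10_000)

# Illustrative parameters (not from the paper).
BENEFIT = 0.25          # consumer's gross gain from a repaid loan
BASE_RATE = 0.10        # interest rate when the lender absorbs default costs
DEFAULT_PENALTY = 0.30  # consumer's loss from defaulting

def consumer_welfare(threshold: float, pass_costs_through: bool) -> float:
    """Total welfare of approved consumers in this toy model."""
    approved = p >= threshold
    if pass_costs_through:
        # Modeling choice: the lender raises rates on everyone to cover
        # the expected default cost of the approved pool.
        expected_default = 1.0 - p[approved].mean()
        rate = BASE_RATE + 0.5 * expected_default
    else:
        # Modeling choice: the lender absorbs the cost; rates stay flat.
        rate = BASE_RATE
    # Each approved consumer gains (BENEFIT - rate) if they repay,
    # and loses DEFAULT_PENALTY if they default.
    w = p[approved] * (BENEFIT - rate) - (1.0 - p[approved]) * DEFAULT_PENALTY
    return float(w.sum())

strict, relaxed = 0.9, 0.7  # intervention: relax the approval threshold

# Lender absorbs costs: relaxing the threshold helps consumers in aggregate.
print(consumer_welfare(relaxed, False) > consumer_welfare(strict, False))
# Costs passed through: the same intervention lowers aggregate welfare.
print(consumer_welfare(relaxed, True) < consumer_welfare(strict, True))
```

In this sketch the sign of the intervention's effect flips on a single modeling toggle, which is the kind of sensitivity the abstract describes, though the actual analysis in the paper is more general than this one-parameter example.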
