Poster in Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#26: False Consensus Biases AI Against Vulnerable Stakeholders

Mengchen Dong

Keywords: [ AI governance ] [ trade-off ] [ welfare ] [ AI ethics ] [ morality ]

Fri 15 Dec 7:50 a.m. PST — 8:50 a.m. PST

Abstract:

The use of Artificial Intelligence (AI) is becoming commonplace in government operations, but it creates trade-offs that can harm vulnerable stakeholders. In particular, deploying AI systems for welfare benefit allocation accelerates decision-making and speeds the provision of critical help, but it has already led to an increase in unfair benefit denials and false fraud accusations. Collecting data in the US and the UK (N = 2449), we explore the acceptability of such speed-accuracy trade-offs in populations of claimants and non-claimants. We observe a general willingness to accept modest accuracy losses in exchange for speed gains, but this aggregate view masks divergences between the preferences of vulnerable and less vulnerable stakeholders. Furthermore, we show that while claimants can provide unbiased estimates of the preferences of non-claimants, non-claimants have no insight into the preferences of claimants, even in the presence of financial incentives. Altogether, these findings demonstrate the need for careful stakeholder engagement when designing and deploying AI systems, particularly in contexts marked by power imbalance. In the absence of such engagement, policy decisions about AI systems can be driven by a false consensus shaped by the voice of a dominant group whose members, however well-intentioned, remain unaware of the actual preferences of those directly affected by the system.