
Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#54: Resource-rational moral judgment

Sarah Wu · Xiang Ren · Sydney Levine

Keywords: [ moral psychology ] [ moral judgment ] [ resource rationality ] [ computational ethics ]

[ Project Page ]
Fri 15 Dec 12:50 p.m. PST — 1:50 p.m. PST


It is widely agreed that the mind has a range of different mechanisms it can use to make moral judgments. But how does it decide which one to use when? Recent theoretical work has suggested that people select mechanisms of moral judgment in a way that is resource-rational --- that is, by rationally trading off effort against accuracy. For instance, people may follow general rules in low-stakes situations but engage costlier mechanisms (such as consequentialist or contractualist reasoning) when the stakes are high. Despite its theoretical appeal, this hypothesis makes empirical predictions that have not yet been tested directly. Here, we evaluate whether humans and large language models (LLMs) exhibit resource-rational moral reasoning in a case study of medical triage, in which we manipulated the complexity (number of patients in line) and the stakes (severity of symptoms) of the scenario. As predicted, we found that the higher the stakes and/or the lower the complexity, the more people elected to use, and endorsed using, a more effortful mechanism over following a general rule. The evidence for similar resource-rational reasoning in the LLMs, however, was mixed. Our results provide the first direct evidence that people's moral judgments reflect resource-rational cognitive constraints, and they highlight opportunities for developing AI systems better aligned with human moral values.
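The effort-accuracy tradeoff described above can be sketched in code. The following is an illustrative toy model, not the authors' actual model: each judgment mechanism is assumed to have an accuracy and a cognitive cost, with effortful mechanisms' costs growing with scenario complexity, and the agent picks the mechanism whose stakes-weighted accuracy best justifies its cost. All parameter values are invented for illustration.

```python
# Toy resource-rational mechanism selection (illustrative assumptions only).
# Each mechanism: accuracy of its judgments, base effort cost, and whether
# that cost scales with scenario complexity (e.g., number of patients).
MECHANISMS = [
    {"name": "general rule", "accuracy": 0.7, "cost": 1.0, "scales": False},
    {"name": "consequentialist reasoning", "accuracy": 0.95, "cost": 2.0, "scales": True},
]

def choose_mechanism(stakes: float, complexity: float) -> str:
    """Pick the mechanism maximizing stakes * accuracy - effort cost."""
    def utility(m):
        cost = m["cost"] * (complexity if m["scales"] else 1.0)
        return stakes * m["accuracy"] - cost
    return max(MECHANISMS, key=utility)["name"]

# Low stakes and high complexity favor the cheap general rule;
# high stakes and low complexity favor effortful reasoning.
print(choose_mechanism(stakes=1, complexity=10))   # general rule
print(choose_mechanism(stakes=50, complexity=2))   # consequentialist reasoning
```

Under these made-up parameters, the toy agent reproduces the qualitative pattern the abstract reports for humans: effortful reasoning is selected only when the stakes are high enough, or the complexity low enough, to justify its cost.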
