

Poster in Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#06: Value Malleability and its Implications for AI Alignment

Nora Ammann

Keywords: [ performative power ] [ AI Ethics ] [ AI alignment ] [ legitimacy ]

Fri 15 Dec 7:50 a.m. PST — 8:50 a.m. PST

Abstract:

I argue that (1) a realistic understanding of the nature of values takes them to be malleable rather than fixed; (2) there are legitimate as well as illegitimate cases of value change; and (3) AI systems have an increasing capacity to affect people’s value-change trajectories. Given this, approaches to aligning AI must take seriously the implications of value malleability and address the problem of (il)legitimate value change: that is, the problem of ensuring that AI systems neither cause illegitimate value change nor forestall legitimate value change in humans and society. To further elucidate the relevance of this problem, I discuss the risks that arise from failing to account for the malleability of human values, the ways these risks already manifest today, and how they are likely to be exacerbated as AI systems become more advanced and more widely deployed.
