Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#32: Foundational Moral Values for AI Alignment

Betty Hou · Brian Green

Keywords: [ ethics ] [ moral values ] [ artificial intelligence ] [ morality ] [ alignment ] [ moral philosophy ]

[ Project Page ]
Fri 15 Dec 12:50 p.m. PST — 1:50 p.m. PST


Solving the AI alignment problem requires a defensible, clearly specified set of values toward which AI systems can be aligned. Current targets for alignment remain underspecified and lack philosophical robustness. In this paper, we argue for the inclusion of five core, foundational values, drawn from moral philosophy and grounded in the requisites for human existence: survival, sustainable intergenerational existence, society, education, and truth. These values not only provide clearer direction for technical alignment work; they also highlight both the threats AI systems pose to these values and the opportunities such systems offer for attaining and sustaining them.
