

Workshop

AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

Sydney Levine · Liwei Jiang · Jared Moore · Zhijing Jin · Yejin Choi

Room 255 - 257

Fri 15 Dec, 6:45 a.m. PST

Be it in advice from a chatbot, suggestions on how to administer resources, or decisions about which content to highlight, AI systems increasingly make value-laden choices. Researchers, however, are growing concerned about whether these systems are making the right ones. These emerging issues in the AI community have long been topics of study in moral philosophy and moral psychology. Philosophers and psychologists have for decades (if not centuries) been interested in the systematic description and evaluation of human morality and the sub-problems that arise when attempting to describe and prescribe answers to moral questions. For instance, they have long debated the merits and pitfalls of utility-based versus rule-based theories of morality, as well as the practical challenges of implementing them in resource-limited systems. They have pondered what to do in cases of moral uncertainty, attempted to enumerate all morally relevant concepts, and argued about what counts as a moral issue at all.

In some isolated cases, AI researchers have begun to adopt the theories, concepts, and tools developed by moral philosophers and moral psychologists. For instance, we use the "trolley problem" as a tool, adopt philosophical moral frameworks to tackle contemporary AI problems, and have begun developing benchmarks that draw on psychological experiments probing moral judgment and development.

Despite this, interdisciplinary dialogue remains limited. Each field uses specialized language, making it difficult for AI researchers to adopt the theoretical and methodological frameworks developed by philosophers and psychologists. Moreover, many theories in philosophy and psychology are developed at a high level of abstraction and are not computationally precise. Overcoming these barriers requires interdisciplinary dialogue and collaboration.
This workshop will create a venue to facilitate these interactions by bringing together psychologists, philosophers, and AI researchers working on morality. We hope the workshop will be a jumping-off point for long-lasting collaborations among the attendees and will break down the barriers that currently divide the disciplines. The central theme of the workshop is the application of theories from moral philosophy and moral psychology to AI practice. Our invited speakers are among the leaders of the emerging effort to draw on philosophy and psychology to develop ethical AI systems. Their talks will demonstrate cutting-edge cross-disciplinary work while also highlighting its shortcomings (and those of the field more broadly). Each talk will receive a 5-minute commentary from a junior scholar in a field different from the speaker's. We hope these talks and commentaries will inspire conversations among the rest of the attendees.


Schedule