
Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#49: Anticipating the risks and benefits of counterfactual world simulation models

Lara Kirfel · Rob MacCoun · Thomas Icard · Tobias Gerstenberg

Keywords: [ Responsible AI ] [ AI in Law ] [ Simulation ] [ Counterfactual Simulation ] [ Generative AI ] [ AI Ethics ]

[ Project Page ]
Fri 15 Dec 12:50 p.m. PST — 1:50 p.m. PST


This paper examines the transformative potential of Counterfactual World Simulation Models (CWSMs). A CWSM uses multi-modal evidence, such as CCTV footage of a road accident, to build a high-fidelity 3D reconstruction of what happened. It can answer causal questions, such as whether the accident happened because the driver was speeding, by simulating what would have happened in relevant counterfactual situations. We argue for a normative and ethical framework that guides and constrains the simulation of counterfactuals. We address the challenge of ensuring fidelity in reconstructions while preventing the perpetuation of stereotypes in counterfactual simulations. We anticipate different modes of user interaction with CWSMs and discuss how their outputs may be presented. Finally, we address the prospective applications of CWSMs in the legal domain, recognizing both their potential to revolutionize legal proceedings and the ethical concerns they engender. Sketching a new genre of AI, this paper seeks to illuminate the path forward for the responsible and effective use of CWSMs.
