

Poster in Workshop: AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics

#11: Does Explainable AI Have Moral Value?

Joshua Brand · Luca Nannini

Keywords: [ Explainable AI ] [ Agency ] [ Moral Duties ] [ Interdisciplinary Collaboration ] [ Reciprocity ]

[ Project Page ]
Fri 15 Dec 7:50 a.m. PST — 8:50 a.m. PST

Abstract:

Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders. Current discourse often examines XAI in isolation, as a technological tool, a user interface, or a policy mechanism. This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity. We argue that XAI should be appreciated not merely as a right but as part of our moral duties, one that helps sustain reciprocal relationships among the humans affected by AI systems. This is because, we argue, explanations help sustain constitutive symmetry and agency in AI-led decision-making processes. We then assess the leading XAI communities and reveal gaps between the ideal of reciprocity and practical feasibility. Machine learning offers useful techniques but overlooks evaluation and adoption challenges. Human-computer interaction provides preliminary insights but oversimplifies organizational contexts. Policies espouse accountability but lack technical nuance. Synthesizing these views exposes the barriers to implementable, ethical XAI. Still, positioning XAI as a moral duty transcends rights-based discourse and captures a more robust and complete moral picture. This paper provides an accessible, detailed analysis elucidating the moral value of explainability.
