On Evaluating Explanation Utility for Human-AI Decision-Making in NLP
Fateme Hashemi Chaleshtori · Atreya Ghosal · Ana Marasovic
2023 Spotlight
in
Workshop: XAI in Action: Past, Present, and Future Applications
Abstract
Is explainability a false promise? This debate has emerged from the lack of consistent evidence that explanations help in situations they are introduced for. In NLP, the evidence is not only inconsistent but also scarce. While there is a clear need for more human-centered, application-grounded evaluations, it is less clear where NLP researchers should begin if they want to conduct them. To address this, we introduce evaluation guidelines established through an extensive review and meta-analysis of related work.