

Poster in Workshop: XAI in Action: Past, Present, and Future Applications

Caution to the Exemplars: On the Intriguing Effects of Example Choice on Human Trust in XAI

Tobias Leemann · Yao Rong · Thai-Trang Nguyen · Enkelejda Kasneci · Gjergji Kasneci


Abstract:

In model audits, explainable AI (XAI) systems are usually presented to human auditors on only a limited number of examples due to time constraints. However, recent literature has suggested that to establish trust in ML models, not only the model’s overall performance matters but also the specific examples on which it is correct. In this work, we study this hypothesis through a controlled user study with N = 320 participants. On a tabular and an image dataset, we show model explanations to users on examples that are categorized as either ambiguous or unambiguous: for ambiguous examples, human raters disagree on the correct label, whereas for unambiguous examples they agree. We find that ambiguity can have a substantial effect on human trust, which is, however, influenced by surprising interactions between data modality and explanation quality. While unambiguous examples boost trust for explanations that remain plausible, they also help auditors identify highly implausible explanations, thereby decreasing trust. Our results suggest paying closer attention to the selected examples when presenting XAI techniques.
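The abstract's operational criterion (an example is unambiguous when all human raters agree on its label, ambiguous otherwise) can be illustrated with a minimal sketch. The helper below is hypothetical: the paper does not publish its categorization code, and the full-agreement threshold is an assumption, not the authors' documented procedure.

```python
from collections import Counter

def categorize_example(rater_labels, agreement_threshold=1.0):
    """Classify an example by inter-rater agreement.

    Hypothetical helper: returns "unambiguous" when the fraction of
    raters choosing the majority label meets the threshold (here,
    full agreement by assumption), and "ambiguous" otherwise.
    """
    counts = Counter(rater_labels)
    majority_fraction = counts.most_common(1)[0][1] / len(rater_labels)
    return "unambiguous" if majority_fraction >= agreement_threshold else "ambiguous"

# Three raters agree -> unambiguous; a split vote -> ambiguous
print(categorize_example(["cat", "cat", "cat"]))  # unambiguous
print(categorize_example(["cat", "dog", "cat"]))  # ambiguous
```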
