Poster in Workshop: HCAI@NeurIPS 2022, Human Centered AI

Trust Explanations to Do What They Say

Neil Natarajan · Reuben Binns · Jun Zhao · Nigel Shadbolt

Keywords: [ Trustworthiness ] [ Trust ] [ Explainability ] [ Interpretability ]


Abstract:

How much should we trust a decision made by an AI algorithm? Trusting an algorithm without cause may lead to abuse, and mistrusting it may similarly lead to disuse. Trust in an AI is only desirable if it is warranted; thus, calibrating trust is critical to ensuring appropriate use. To calibrate trust appropriately, AI developers should provide contracts specifying the use cases in which an algorithm can and cannot be trusted. Automated explanation of AI outputs is often touted as a method for building trust in an algorithm. However, automated explanations are themselves produced by algorithms, so trust in these explanations is likewise only desirable if it is warranted. Developers of algorithms that explain AI outputs (xAI algorithms) should therefore provide similar contracts, specifying the use cases in which an explanation can and cannot be trusted.