Poster in Workshop: XAI in Action: Past, Present, and Future Applications

Optimising Human-AI Collaboration by Learning Convincing Explanations

Alex Chan · Alihan Hüyük · Mihaela van der Schaar

[ Project Page ]
Sat 16 Dec 12:01 p.m. PST — 1 p.m. PST

Abstract:

Machine learning models are increasingly being deployed to take, or assist in taking, complicated and high-impact decisions, from quasi-autonomous vehicles to clinical decision support systems. This poses challenges, particularly when models have hard-to-detect failure modes and are able to take actions without oversight. To handle this challenge, we propose a method for a collaborative system that remains safe by having a human ultimately make the decisions, while giving the model the best opportunity to convince and debate them with interpretable explanations. However, the most helpful explanation varies among individuals and may be inconsistent with stated preferences. To this end, we develop an algorithm, Ardent, that efficiently learns a ranking of explanations through interaction in order to best assist humans in completing a task. By utilising a collaborative approach, we can ensure safety and improve performance while addressing concerns about transparency and accountability. Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations, which we validate through extensive simulations alongside a user study involving a challenging image classification task, demonstrating consistent improvement over competing systems.
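The abstract does not describe Ardent's internals, so the Python sketch below is only a rough illustration of the stated core idea: learning a ranking over explanation styles from interactive feedback. It uses a generic Thompson-sampling bandit, and every name in it (ExplanationPreferenceLearner, choose_style, the explanation styles, the simulated feedback probabilities) is a hypothetical stand-in, not taken from the paper.

    import random

    class ExplanationPreferenceLearner:
        """Hypothetical sketch: learn which explanation style best convinces
        a given user via Thompson sampling over Beta posteriors. This is not
        the Ardent algorithm itself, only an illustration of learning a
        ranking of explanations through interaction."""

        def __init__(self, styles):
            # One Beta(alpha, beta) posterior per explanation style.
            self.posteriors = {s: [1.0, 1.0] for s in styles}

        def choose_style(self):
            # Sample a plausible "convincingness" per style; show the best.
            draws = {s: random.betavariate(a, b)
                     for s, (a, b) in self.posteriors.items()}
            return max(draws, key=draws.get)

        def update(self, style, convinced):
            # Bernoulli feedback: did the explanation convince the human?
            a, b = self.posteriors[style]
            self.posteriors[style] = [a + convinced, b + (1 - convinced)]

        def ranking(self):
            # Current ranking of styles by posterior mean.
            mean = lambda ab: ab[0] / (ab[0] + ab[1])
            return sorted(self.posteriors,
                          key=lambda s: mean(self.posteriors[s]),
                          reverse=True)

    # Simulated user whose true preferences are unknown to the learner.
    learner = ExplanationPreferenceLearner(
        ["saliency_map", "counterfactual", "example_based"])
    true_probs = {"saliency_map": 0.4, "counterfactual": 0.7,
                  "example_based": 0.5}
    for _ in range(100):
        style = learner.choose_style()
        convinced = random.random() < true_probs[style]
        learner.update(style, convinced)
    print(learner.ranking())  # e.g. ['counterfactual', 'example_based', 'saliency_map']

Under these assumptions, each interaction nudges the posterior for the chosen style, so the learned ranking gradually adapts to an individual user's preferences even when those preferences differ from what the user states up front.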
