

Poster in Workshop: Workshop on Human and Machine Decisions

Nearest-neighbor is more useful than feature attribution in improving human accuracy on image classification

Giang Nguyen · Anh Nguyen


Abstract:

Recent advances in eXplainable Artificial Intelligence have enabled Artificial Intelligence (AI) systems to describe their thought process to human users. Moreover, given the high performance of AI on i.i.d. test sets, it is interesting to study whether such AIs can work alongside humans and improve the accuracy of user decisions. We conduct a user study with 320 lay and 11 expert users to understand the effectiveness of state-of-the-art attribution methods in assisting humans in ImageNet classification, Stanford Dogs fine-grained classification, and these two tasks when the input image contains adversarial perturbations. We found that, overall, feature attribution is surprisingly no more effective than showing humans nearest training-set examples. On the harder task of fine-grained dog classification, presenting attribution maps to humans does not help, but instead hurts, the performance of human-AI teams compared to AI alone. Our findings encourage the community to rigorously test their methods on downstream human-in-the-loop applications and to rethink existing evaluation metrics.
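For context, the nearest-neighbor explanations compared in this study show users the training images most similar to the query image. Below is a minimal sketch of one common way to retrieve such neighbors, using cosine similarity over a pretrained ResNet-50's penultimate-layer features; this is an illustrative assumption, not necessarily the authors' exact pipeline.

```python
# Illustrative sketch: retrieving nearest training-set examples as explanations.
# Assumes preprocessed image tensors and a pretrained ResNet-50 backbone;
# the paper's actual setup may differ.
import torch
import torch.nn.functional as F
from torchvision import models

# Feature extractor: ResNet-50 with the classification head removed,
# so the forward pass returns pooled penultimate-layer features.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()


@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map preprocessed images (N, 3, 224, 224) to L2-normalized feature vectors."""
    feats = backbone(images)
    return F.normalize(feats, dim=1)


@torch.no_grad()
def nearest_neighbors(query: torch.Tensor, train_images: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Return indices of the k training images closest to the query in feature space."""
    q = embed(query.unsqueeze(0))      # (1, D)
    bank = embed(train_images)         # (N, D)
    sims = q @ bank.T                  # cosine similarity, since features are normalized
    return sims.topk(k, dim=1).indices.squeeze(0)
```

In a human-AI study, the retrieved training images (with their ground-truth labels) would be shown to the user alongside the model's prediction, in place of or in addition to an attribution heatmap.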
