Oral in Workshop: eXplainable AI approaches for debugging and diagnosis

[O5] Do Feature Attribution Methods Correctly Attribute Features?

Yilun Zhou · Serena Booth · Marco Tulio Ribeiro · Julie A Shah


Abstract:

Feature attribution methods are exceedingly popular in interpretable machine learning. They aim to compute an attribution score for each input feature that represents its importance, but there is no consensus on what "attribution" means, leading to many competing methods with little systematic evaluation. Evaluation is complicated in particular by the lack of ground-truth attributions; to address this, we propose a dataset modification procedure that constructs such ground truth. Using this procedure, we evaluate three common interpretability methods: saliency maps, rationales, and attention. We identify several deficiencies and add new perspectives to the growing body of evidence questioning the correctness and reliability of these methods in the wild. Our evaluation approach is model-agnostic and can be used to assess future feature attribution proposals as well.
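
To make the evaluation idea concrete, here is a minimal sketch of one way a dataset-modification procedure can construct attribution ground truth, assuming a binary text-classification setting. The trigger-token construction and the helpers `inject_ground_truth` and `precision_at_k` are illustrative assumptions for this sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def inject_ground_truth(texts, trigger="watermark-token", rng=None):
    """Illustrative dataset modification (an assumption, not necessarily the
    paper's exact procedure): insert a trigger token into half the examples
    and relabel every example by trigger presence alone, so the trigger
    position is, by construction, the attribution ground truth."""
    if rng is None:
        rng = np.random.default_rng(0)
    new_texts, labels, ground_truth = [], [], []
    for text in texts:
        tokens = text.split()
        if rng.random() < 0.5:
            pos = int(rng.integers(0, len(tokens) + 1))
            tokens.insert(pos, trigger)
            labels.append(1)            # label is fully determined by the trigger
            ground_truth.append({pos})  # only the trigger should receive credit
        else:
            labels.append(0)
            ground_truth.append(set())  # no token is label-relevant here
        new_texts.append(" ".join(tokens))
    return new_texts, labels, ground_truth

def precision_at_k(scores, true_positions, k=1):
    """Fraction of the top-k attributed tokens that fall on ground-truth
    positions; a faithful attribution method should score near 1.0."""
    top_k = np.argsort(scores)[::-1][:k]
    return len({int(i) for i in top_k} & true_positions) / k

# Usage sketch: retrain a classifier on (new_texts, labels), compute
# per-token attribution scores with the method under test, and check
# precision_at_k(scores, truth[i]) on each trigger-bearing example i.
texts = ["the movie was great fun", "dull plot and flat acting"]
new_texts, labels, truth = inject_ground_truth(texts)
print(new_texts, labels, truth)
```

Because the relabeled dataset leaves the trigger as the only signal, any sufficiently accurate model trained on it must rely on the trigger, which is what makes its position a defensible ground truth regardless of the model under test, consistent with the model-agnostic evaluation described in the abstract.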