Poster in Workshop: XAI in Action: Past, Present, and Future Applications

AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments

Yang Zhang · Yawei Li · Hannah Brown · Mina Rezaei · Bernd Bischl · Philip Torr · Ashkan Khakzar · Kenji Kawaguchi

Sat 16 Dec 12:01 p.m. PST — 1 p.m. PST

Abstract:

Feature attribution explains neural network outputs by identifying relevant input features. How do we know whether the identified features are indeed relevant to the network? This notion is referred to as faithfulness, an essential property that reflects the alignment between the identified (attributed) features and the features actually used by the model. One recent trend for testing faithfulness is to design the data such that we know which input features are relevant to the label, and then train a model on the designed data. The identified features are subsequently evaluated by comparing them with these designed ground-truth features. However, this idea rests on the assumption that the neural network learns to use all and only the designed features, and there is no guarantee that the learning process trains the network in this way. In this paper, we address this missing link by explicitly designing the neural network, manually setting its weights, along with designing the data, so we know precisely which input features in the dataset are relevant to the designed network. We can thus test faithfulness in AttributionLab, our designed synthetic environment, which serves as a sanity check and is effective in filtering out unfaithful attribution methods: if an attribution method is not faithful in a simple controlled environment, it can be unreliable in more complex scenarios. Furthermore, the AttributionLab environment serves as a laboratory for controlled experiments through which we can study feature attribution methods, identify issues, and suggest potential improvements.
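The recipe outlined above can be made concrete with a small illustration. The sketch below is not the authors' AttributionLab implementation; it assumes a toy hand-weighted linear model, uses Gradient × Input as the attribution method under test, and compares the top attributed features against the designed ground-truth mask. All names and parameters are illustrative.

```python
# Minimal sketch of the controlled-environment idea: design the data so the
# ground-truth relevant features are known, set the network weights by hand so
# we know exactly which features the model uses, then check whether an
# attribution method recovers those features.
import torch

torch.manual_seed(0)

# Designed input: 8 features, only the first 3 are relevant by construction.
ground_truth_mask = torch.tensor([1., 1., 1., 0., 0., 0., 0., 0.])
x = torch.randn(8, requires_grad=True)

# Designed "network": a linear model whose weights are set manually, so it
# provably uses only the first 3 features (weight 0 everywhere else).
weights = torch.tensor([2.0, -1.5, 0.5, 0., 0., 0., 0., 0.])

def model(inp):
    return (weights * inp).sum()

# Attribution method under test: Gradient x Input (a common baseline).
out = model(x)
out.backward()
attribution = (x.grad * x.detach()).abs()

# Faithfulness check: do the highest-attributed features match the designed
# ground truth? Compare the top-k attributed features to the mask.
k = int(ground_truth_mask.sum().item())
top_k = torch.topk(attribution, k).indices
hits = ground_truth_mask[top_k].sum().item()
print(f"precision@{k}: {hits / k:.2f}")  # 1.00 means all top-k features are ground truth
```

In this toy setting a faithful method should place all of its top-k attribution mass on the three designed features; a method that fails even here would be flagged before being trusted in more complex scenarios.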
