Poster in Workshop: XAI in Action: Past, Present, and Future Applications

GLANCE: Global to Local Architecture-Neutral Concept-based Explanations

Avinash Kori · Ben Glocker · Francesca Toni


Abstract:

Most current explainability techniques focus on capturing the importance of features in the input space. However, given the complexity of models and data-generating processes, the resulting explanations are far from complete: they lack an indication of feature interactions and a visualization of their effect. In this work, we propose a novel surrogate-model-based explainability framework that explains the decisions of any CNN-based image classifier by extracting causal relations between its features. These causal relations serve as global explanations from which local explanations of different forms can be obtained. Specifically, we employ a generator to visualize the 'effect' of interactions among features in latent space and derive feature importance therefrom as local explanations. We demonstrate and evaluate explanations obtained with our framework on the Morpho-MNIST, FFHQ, and AFHQ datasets.
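
As a rough illustration of the global-to-local pipeline the abstract describes, the sketch below treats a CNN's penultimate activations as latent features and scores one feature's local importance via a gradient on the classifier head. Everything here is a hypothetical stand-in: the TinyCNN architecture, the gradient-based importance proxy, and all names are assumptions for illustration, not the authors' actual model, causal-extraction procedure, or estimator.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Stand-in CNN classifier; its penultimate activations play the role
    of the latent 'features' among which relations would be extracted."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> (N, 8*4*4) latents
        )
        self.head = nn.Linear(8 * 16, n_classes)

    def forward(self, x):
        z = self.features(x)        # latent feature vector
        return self.head(z), z

def local_importance(model: TinyCNN, x: torch.Tensor, feature_idx: int):
    """Gradient of the predicted-class logit w.r.t. one latent feature --
    a simple local-importance proxy, not the paper's method."""
    _, z = model(x)
    z = z.detach().requires_grad_(True)   # treat latents as the input
    logits = model.head(z)
    pred = logits.argmax(dim=1)
    logits[torch.arange(x.size(0)), pred].sum().backward()
    return z.grad[:, feature_idx]

x = torch.randn(2, 1, 28, 28)             # dummy MNIST-sized images
model = TinyCNN()
print(local_importance(model, x, feature_idx=0))
```

In the framework the abstract sketches, a generator would additionally decode such latent interventions back to image space to visualize their 'effect'; only the importance step is illustrated here.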
