

Poster in Workshop: XAI in Action: Past, Present, and Future Applications

ObEy Anything: Quantifiable Object-based Explainability without Ground Truth Annotations

William Ho · Lennart Schulze · Richard Zemel


Abstract:

With neural networks quickly being adopted throughout society, understanding their behavior is becoming more important than ever. However, today's explainable AI field mostly consists of methods that explain single decisions of a model, which do not give insight into the model as a whole, rendering the notion of explainability ambiguous. To this end, we contribute to the discussion of the distinction between explanation methods and explainability, and introduce Object-based Explainability (ObEy), a novel metric to quantify the explainability of models. ObEy is grounded in the natural sciences and scores saliency maps based on the visual perception of objects using segmentation masks. However, as such masks are not readily available in practical settings, we propose to use a new foundation model to generate segmentation masks, making our metric applicable in any setting. We demonstrate ObEy's immediate applicability to practical use cases and present new insights into the explainability of adversarially trained models from a quantitative perspective.
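The abstract describes scoring saliency maps against object segmentation masks, but does not spell out the scoring rule. As a minimal sketch of the general idea, the hypothetical function `obey_style_score` below measures the fraction of a saliency map's mass that falls inside object masks; the exact ObEy formulation, and the use of a segmentation foundation model such as SAM to produce the masks, are assumptions for illustration only.

```python
import numpy as np

def obey_style_score(saliency: np.ndarray, masks: list[np.ndarray]) -> float:
    """Illustrative object-based explainability proxy (not the paper's exact metric).

    saliency: (H, W) non-negative saliency map for one input image.
    masks:    list of (H, W) boolean object masks, e.g. from a segmentation
              foundation model (assumed; the paper's choice is not given here).
    Returns the share of total saliency mass concentrated on object pixels.
    """
    saliency = np.clip(saliency, 0.0, None)
    total = saliency.sum()
    if total == 0 or not masks:
        return 0.0
    # Union of all object masks: pixels belonging to any annotated object.
    object_region = np.zeros(saliency.shape, dtype=bool)
    for m in masks:
        object_region |= m.astype(bool)
    return float(saliency[object_region].sum() / total)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sal = rng.random((32, 32))            # toy saliency map
    mask = np.zeros((32, 32), dtype=bool)  # toy object mask
    mask[8:24, 8:24] = True
    print(f"score = {obey_style_score(sal, [mask]):.3f}")
```

Under this reading, a higher score indicates saliency concentrated on perceived objects rather than background, which is one way such a metric could remain applicable without ground-truth annotations once masks are generated automatically.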
