Poster

Labeling Neural Representations with Inverse Recognition

Kirill Bykov · Laura Kopf · Shinichi Nakajima · Marius Kloft · Marina Höhne

Great Hall & Hall B1+B2 (level 1) #1519
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST

Abstract:

Deep Neural Networks (DNNs) have demonstrated remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown. Existing global explainability methods, such as Network Dissection, face limitations such as reliance on segmentation masks, lack of statistical significance testing, and high computational demands. We propose Inverse Recognition (INVERT), a scalable approach for linking learned representations to human-interpretable concepts based on their ability to differentiate between concepts. In contrast to prior work, INVERT is capable of handling diverse types of neurons, has lower computational complexity, and does not rely on the availability of segmentation masks. Moreover, INVERT provides an interpretable metric that assesses the alignment between a representation and its corresponding explanation and delivers a measure of statistical significance, emphasizing its utility and credibility. We demonstrate the applicability of INVERT in various scenarios, including the identification of representations affected by spurious correlations and the interpretation of the hierarchical structure of decision-making within the models.
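To make the core idea concrete, the sketch below illustrates one way to score how well a single neuron's activations differentiate images containing a concept from images that do not, using AUC as the separability score and a Mann-Whitney U test as a simple significance check. This is a minimal illustration under assumptions, not the authors' implementation: the function name `concept_alignment`, the probe-data layout, and the choice of scipy/scikit-learn routines are all placeholders for exposition.

```python
# Minimal sketch (assumed, not the paper's code): measure how well a neuron's
# activations separate concept images from non-concept images.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score


def concept_alignment(activations: np.ndarray, concept_labels: np.ndarray):
    """activations: (N,) neuron activations over a probe dataset.
    concept_labels: (N,) binary mask, 1 where the concept is present."""
    # Separability score in [0, 1]: 0.5 means no differentiation ability.
    auc = roc_auc_score(concept_labels, activations)
    # One-sided test: are activations on concept images systematically higher?
    _, p_value = mannwhitneyu(
        activations[concept_labels == 1],
        activations[concept_labels == 0],
        alternative="greater",
    )
    return auc, p_value


# Toy usage with synthetic activations standing in for a probe dataset.
rng = np.random.default_rng(0)
acts = np.concatenate([rng.normal(1.0, 1.0, 200),    # concept images
                       rng.normal(0.0, 1.0, 800)])   # other images
labels = np.concatenate([np.ones(200), np.zeros(800)]).astype(int)
print(concept_alignment(acts, labels))
```

In this toy setup, explaining a neuron would amount to searching over candidate concepts (or compositions of concepts) and reporting the one with the highest separability score together with its significance level.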
