

Poster in Workshop: XAI in Action: Past, Present, and Future Applications

On the Consistency of GNN Explainability Methods

Ehsan Hajiramezanali · Sepideh Maleki · Alex Tseng · Aicha BenTaieb · Gabriele Scalia · Tommaso Biancalani


Abstract:

Despite the widespread use of post-hoc explanation methods for graph neural networks (GNNs) in high-stakes settings, their quality and reliability have not been comprehensively evaluated. Such evaluation is challenging primarily because graph data are non-Euclidean, of arbitrary size, and topologically complex. In this context, we argue that consistency, the ability to produce similar explanations for input graphs with minor structural changes that do not alter their output predictions, is a key requirement for effective post-hoc GNN explanations. To fill this gap, we introduce a novel metric based on the Fused Gromov–Wasserstein distance to quantify consistency. Finally, we demonstrate that current methods do not perform well according to this metric, underscoring the need for further research on reliable GNN explainability methods.
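
The page does not include the paper's code. As a rough illustration only, the consistency idea could be instantiated with the POT library's fused Gromov–Wasserstein solver, comparing node-level explanation scores of a graph and a lightly perturbed copy whose prediction is unchanged. The perturbation scheme, the use of adjacency matrices as structure matrices, and all names below are assumptions for this sketch, not the authors' exact formulation.

```python
# A minimal sketch (not the authors' implementation): measure how much a GNN
# explanation changes under a small, prediction-preserving edge perturbation,
# using the Fused Gromov-Wasserstein (FGW) distance from the POT library.
# Assumptions: node importance scores act as node features, adjacency matrices
# act as structure matrices, and node distributions are uniform.
import numpy as np
import ot  # Python Optimal Transport: pip install pot


def explanation_fgw_distance(adj1, scores1, adj2, scores2, alpha=0.5):
    """FGW distance between two explained graphs.

    adj1, adj2     : (n1, n1), (n2, n2) adjacency matrices (structure term)
    scores1/scores2: (n1,), (n2,) node importance scores from an explainer (feature term)
    alpha          : trade-off between the feature cost and the structure cost
    """
    # Feature cost matrix: squared difference between node importance scores.
    M = (scores1[:, None] - scores2[None, :]) ** 2
    # Uniform probability mass over the nodes of each graph.
    p = np.full(len(scores1), 1.0 / len(scores1))
    q = np.full(len(scores2), 1.0 / len(scores2))
    return ot.gromov.fused_gromov_wasserstein2(
        M, adj1.astype(float), adj2.astype(float), p, q,
        loss_fun="square_loss", alpha=alpha,
    )


# Toy usage: a 4-node path graph and a copy with one extra edge.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
adj_perturbed = adj.copy()
adj_perturbed[0, 2] = adj_perturbed[2, 0] = 1        # minor structural change
scores = np.array([0.9, 0.1, 0.8, 0.2])              # explanation on original graph
scores_perturbed = np.array([0.85, 0.15, 0.8, 0.2])  # explanation on perturbed graph

d = explanation_fgw_distance(adj, scores, adj_perturbed, scores_perturbed)
print(f"FGW distance between explanations: {d:.4f}")  # smaller = more consistent
```

A lower distance under such prediction-preserving perturbations would indicate a more consistent explainer; the actual metric and perturbation protocol are defined in the paper.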
