As one of the most popular machine learning models today, graph neural networks (GNNs) have attracted intense interest recently, and so has their explainability. Users increasingly seek a better understanding of GNN models and their outcomes. Unfortunately, today's evaluation frameworks for GNN explainability often rely on a few inadequate synthetic datasets, leading to conclusions of limited scope because the problem instances lack complexity. As GNN models are deployed in more mission-critical applications, we are in dire need of a common evaluation protocol for GNN explainability methods. In this paper, we propose, to the best of our knowledge, the first systematic evaluation framework for GNN explainability, considering explainability from the perspective of three distinct "user needs". We propose a unique metric that combines the fidelity measures and classifies explanations based on whether they are sufficient or necessary. We scope our study to node classification tasks and compare the most representative techniques for input-level GNN explainability. On the widely used but inadequate synthetic benchmarks, surprisingly, shallow techniques such as personalized PageRank perform best while requiring minimal computation time. But when the graph structure is more complex and nodes have meaningful features, gradient-based methods perform best according to our evaluation criteria. However, no method dominates the others on all evaluation dimensions, and there is always a trade-off. We further apply our evaluation protocol in a case study on explaining fraud in eBay transaction graphs, reflecting a production environment.
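The metric mentioned above builds on the two standard fidelity measures: fidelity+ (the prediction change when the explanation subgraph is removed, probing whether the explanation is necessary) and fidelity- (the prediction change when only the explanation is kept, probing whether it is sufficient). The sketch below illustrates one way such a combination could work, assuming a weighted harmonic mean of fid+ and (1 - fid-); the function names and the exact formula are illustrative assumptions, not the paper's verbatim definitions.

# Hypothetical sketch of a fidelity-based characterization score.
# All names and the combination formula are assumptions for illustration.

def fidelity_plus(y_full, y_without_expl):
    """Prediction change when the explanation is REMOVED from the graph.
    A high value suggests the explanation is *necessary*."""
    return abs(y_full - y_without_expl)

def fidelity_minus(y_full, y_only_expl):
    """Prediction change when ONLY the explanation is kept.
    A low value suggests the explanation is *sufficient*."""
    return abs(y_full - y_only_expl)

def characterization(fid_plus, fid_minus, w_plus=0.5, w_minus=0.5):
    """Weighted harmonic mean of fid+ and (1 - fid-): the score is high
    only when the explanation is both necessary (high fid+) and
    sufficient (low fid-)."""
    pos, neg = fid_plus, 1.0 - fid_minus
    if pos == 0.0 or neg == 0.0:
        return 0.0
    return (w_plus + w_minus) / (w_plus / pos + w_minus / neg)

# Example: removing the explanation changes the prediction a lot
# (necessary), while the explanation alone nearly reproduces it
# (sufficient), so the combined score is high (~0.87).
print(characterization(fidelity_plus(0.9, 0.1), fidelity_minus(0.9, 0.85)))

A harmonic-mean-style combination is a natural choice here because it punishes imbalance: an explanation that is necessary but not sufficient (or vice versa) cannot score well on the combined metric.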
Author Information
Kenza Amara (ETH Zurich)

I am an ETH AI Center Doctoral Fellow. My research focuses on explainability for graph neural networks and its applications to health, atmospheric and climate models, banking, and e-commerce. I am particularly interested in the fundamental ideas of deep learning on graphs, neural network interpretability, network analysis, graph theory, and reinforcement learning applications in environmental sciences and social networks. I am part of the DS3Lab (Institute for Computing Platforms - Systems Group) led by Prof. Ce Zhang, the Social Networks Lab led by Prof. Ulrik Brandes, and the IAC (Institute for Atmospheric and Climate Science) led by Prof. Sebastian Schemm. Prior to my PhD, I studied at École polytechnique, majoring in computer science and mathematics. I also hold a Master's degree from ETH Zürich in environmental sciences and policy.
Rex Ying (Yale University)
Zitao Zhang (eBay)
Zhihao Han
Yinan Shan
Ulrik Brandes (ETH Zürich)
Sebastian Schemm (ETH Zurich)
More from the Same Authors
- 2022 : GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
  Kenza Amara · Rex Ying · Ce Zhang
- 2022 : Learning Efficient Hybrid Particle-continuum Representations of Non-equilibrium N-body Systems
  Tailin Wu · Michael Sun · Hsuan-Gu Chou · Pranay Reddy Samala · Sithipont Cholsaipant · Sophia Kivelson · Jacqueline Yau · Rex Ying · E. Paulo Alves · Jure Leskovec · Frederico Fiuza
- 2022 : How Powerful is Implicit Denoising in Graph Neural Networks
  Songtao Liu · Rex Ying · Hanze Dong · Lu Lin · Jinghui Chen · Dinghao Wu
- 2022 : Efficient Automatic Machine Learning via Design Graphs
  Shirley Wu · Jiaxuan You · Jure Leskovec · Rex Ying
- 2022 Workshop: New Frontiers in Graph Learning
  Jiaxuan You · Marinka Zitnik · Rex Ying · Yizhou Sun · Hanjun Dai · Stefanie Jegelka
- 2022 : Invited Talk
  Rex Ying