Poster
Evaluating Attribution for Graph Neural Networks
Benjamin Sanchez-Lengeling · Jennifer Wei · Brian Lee · Emily Reif · Peter Wang · Wesley Qian · Kevin McCloskey · Lucy Colwell · Alexander Wiltschko

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #840

Interpretability of machine learning models is critical for scientific understanding, AI safety, and debugging. Attribution is one approach to interpretability: it highlights the input dimensions that are influential to a neural network's prediction. Evaluation of these methods is largely qualitative for image and text models, because acquiring ground-truth attributions requires expensive and unreliable human judgment. Attribution has been little studied for graph neural networks (GNNs), a model class of growing importance that makes predictions on arbitrarily sized graphs. In this work we adapt commonly used attribution methods for GNNs and quantitatively evaluate them using computable ground truths that are objective and challenging to learn. We make concrete recommendations for which attribution methods to use, and provide the data and code for our benchmarking suite. Rigorous and open-source benchmarking of attribution methods on graphs could enable new method development and broader use of attribution in real-world ML tasks.
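As an illustration of the setting the abstract describes, the sketch below computes a gradient-times-input attribution for a toy one-layer graph network in PyTorch. This is not the paper's benchmarking suite: the model, graph, and names (SimpleGCN, adj, feats) are hypothetical, and gradient-times-input stands in for the broader family of attribution methods the paper adapts and evaluates.

import torch
import torch.nn as nn

class SimpleGCN(nn.Module):
    """One graph-convolution layer followed by mean pooling to a graph-level score."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, adj, feats):
        # Aggregate neighbor features (A @ X), transform, then pool over nodes.
        h = torch.relu(self.lin(adj @ feats))
        return self.out(h.mean(dim=0))

# Toy 4-node path graph: adjacency with self-loops, random node features.
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
feats = torch.randn(4, 8, requires_grad=True)

model = SimpleGCN(in_dim=8, hidden_dim=16)
score = model(adj, feats)
score.backward()

# Gradient x input: one attribution score per node, crediting the prediction
# to each node's input features.
node_attribution = (feats.grad * feats.detach()).sum(dim=1)
print(node_attribution)

Each entry of node_attribution is the model's sensitivity to one node's features, scaled by those features. The paper's benchmark compares such per-node scores against computable ground-truth importances, rather than against human judgment as in image and text attribution studies.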

Author Information

Benjamin Sanchez-Lengeling (Google Research)
Jennifer Wei (Google Research)
Brian Lee (Google Inc.)
Emily Reif (Google)
Peter Wang (Columbia University)
Wesley Qian (University of Illinois at Urbana-Champaign)
Kevin McCloskey (Google)
Lucy Colwell (Google)
Alexander Wiltschko (Google Brain)
