Workshop: Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022

Human Interventions in Concept Graph Networks

Lucie Charlotte Magister · Pietro Barbiero · Dmitry Kazhdan · Federico Siciliano · Gabriele Ciravegna · Fabrizio Silvestri · Mateja Jamnik · Pietro Liò


Deploying Graph Neural Networks requires trustworthy models whose interpretable structure and reasoning can support effective human interaction and model checking. Existing explainers fall short: they provide post-hoc explanations that neither support human interaction nor make the model itself more interpretable. To fill this gap, we introduce the Concept Distillation Module, the first differentiable concept-distillation approach for graph networks. The proposed approach is a layer that can be plugged into any graph network to make it explainable by design: it first distills graph concepts from the latent space and then uses them to solve the task. Our results demonstrate that this approach allows graph networks to: (i) support effective human interventions at test time, which can increase human trust as well as significantly improve model performance; (ii) provide high-quality concept-based logic explanations for their predictions; and (iii) attain accuracy comparable with their equivalent vanilla versions.
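To make the idea of concept distillation concrete, the following is a minimal NumPy sketch, not the paper's implementation: the actual Concept Distillation Module is a differentiable layer trained end-to-end inside a graph network, whereas here soft concept scores are simply computed from distances between node embeddings and a set of hypothetical concept prototypes (the names `concept_distill` and `prototypes` are illustrative, not the paper's API). A human intervention at test time would amount to overwriting a row of the resulting concept scores before the downstream readout consumes it.

```python
import numpy as np

def concept_distill(embeddings, prototypes, temperature=1.0):
    """Map node embeddings to soft concept scores.

    Each embedding is compared to each concept prototype; a softmax over
    negative squared distances yields a soft assignment (the "distilled"
    concept vector) that a downstream readout can use to solve the task.
    """
    # Pairwise squared Euclidean distances, shape (n_nodes, n_concepts)
    d = ((embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Toy latent space: four node embeddings clustered near two prototypes
emb = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]])
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
scores = concept_distill(emb, protos)
print(scores.argmax(axis=1))  # first two nodes -> concept 0, last two -> concept 1
```

In the full model the prototypes would be learned parameters and the concept scores would feed a task head, so gradients flow through the distillation step; this sketch only shows the forward mapping from latent space to concepts.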
