Are Defenses for Graph Neural Networks Robust?
Felix Mujkanovic · Simon Geisler · Stephan Günnemann · Aleksandar Bojchevski

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #906

A cursory reading of the literature suggests that we have made substantial progress in designing effective adversarial defenses for Graph Neural Networks (GNNs). Yet, the standard methodology has a serious flaw – virtually all of the defenses are evaluated against non-adaptive attacks, leading to overly optimistic robustness estimates. We perform a thorough robustness analysis of 7 of the most popular defenses, spanning the entire spectrum of strategies, i.e., those aimed at improving the graph, the architecture, or the training. The results are sobering – most defenses show no or only marginal improvement over an undefended baseline. We advocate using custom adaptive attacks as a gold standard, and we outline the lessons we learned from successfully designing such attacks. Moreover, our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
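The key idea of an adaptive attack is that the perturbation is optimized against the defended model itself, rather than against a fixed surrogate. The following is a minimal illustrative sketch (not the authors' method): a greedy structure attack that flips whichever edge most reduces a target node's classification margin under a tiny one-layer GCN implemented in NumPy. The model, graph, and budget here are all hypothetical.

```python
import numpy as np

def gcn_forward(A, X, W):
    # One-layer GCN with symmetric normalization: softmax(normalize(A + I) @ X @ W).
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    logits = A_norm @ X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def margin(probs, target, label):
    # Classification margin of the target node: p(true class) - max p(other class).
    p = probs[target]
    return p[label] - max(p[c] for c in range(len(p)) if c != label)

def greedy_adaptive_attack(A, X, W, target, label, budget):
    # Adaptive in the sense that each candidate edge flip is scored
    # directly on the (defended) model under attack.
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best, best_m = None, margin(gcn_forward(A, X, W), target, label)
        for i in range(n):
            for j in range(i + 1, n):
                # Tentatively flip edge (i, j), score, then undo.
                A[i, j] = A[j, i] = 1 - A[i, j]
                m = margin(gcn_forward(A, X, W), target, label)
                if m < best_m:
                    best, best_m = (i, j), m
                A[i, j] = A[j, i] = 1 - A[i, j]
        if best is None:
            break  # No flip lowers the margin further.
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]
    return A
```

To adapt such an attack to a specific defense, the scoring step would additionally need to account for the defense's preprocessing or training mechanism, which is precisely where non-adaptive evaluations fall short.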

Author Information

Felix Mujkanovic (Technical University of Munich)
Simon Geisler (Technical University of Munich)
Stephan Günnemann (Technical University of Munich)
Aleksandar Bojchevski (CISPA Helmholtz Center for Information Security)