
Adversarial Attacks on Graph Classifiers via Bayesian Optimisation
Xingchen Wan · Henry Kenlay · Robin Ru · Arno Blaas · Michael A Osborne · Xiaowen Dong

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.
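To illustrate the general idea of a black-box, query-efficient attack of this kind (not the authors' actual method), the sketch below runs a simple Bayesian optimisation loop over sparse edge-flip masks: a Gaussian-process surrogate is fitted to the victim's responses, and an upper-confidence-bound acquisition picks the next budget-respecting perturbation to query. The `victim_attack_loss` function, the kernel choice, and all hyperparameters are hypothetical stand-ins, assumed purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def victim_attack_loss(x):
    # Hypothetical stand-in for querying the black-box victim model:
    # x is a binary mask over candidate edges (1 = flip that edge);
    # the return value plays the role of an attack loss, where higher
    # means closer to misclassification. A real attack would query the
    # classifier's output probabilities on the perturbed graph instead.
    w = np.linspace(-1.0, 1.0, x.size)  # arbitrary fixed weights
    return float(np.tanh(x @ w))

def rbf_kernel(A, B, lengthscale=1.5):
    # Squared-exponential kernel on binary perturbation vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def gp_posterior(X, y, Xq, noise=1e-4):
    # Standard GP regression posterior mean/std at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def random_mask(n_edges, budget):
    # A sparse perturbation: flip exactly `budget` candidate edges.
    x = np.zeros(n_edges)
    x[rng.choice(n_edges, size=budget, replace=False)] = 1.0
    return x

def propose(X, y, n_edges, budget, n_cand=200):
    # Sample budget-respecting candidate masks and pick the one
    # maximising a UCB acquisition (posterior mean + 1 std).
    cand = np.stack([random_mask(n_edges, budget) for _ in range(n_cand)])
    mu, sd = gp_posterior(X, y, cand)
    return cand[np.argmax(mu + sd)]

def bo_attack(n_edges=20, budget=3, n_init=5, n_iter=15):
    # A few random initial queries, then sequential BO proposals;
    # the total query count stays small (n_init + n_iter).
    X = np.stack([random_mask(n_edges, budget) for _ in range(n_init)])
    y = np.array([victim_attack_loss(x) for x in X])
    for _ in range(n_iter):
        x_new = propose(X, y, n_edges, budget)
        X = np.vstack([X, x_new])
        y = np.append(y, victim_attack_loss(x_new))
    best = int(np.argmax(y))
    return X[best], y[best]

mask, loss = bo_attack()
```

The key point the sketch mirrors from the abstract is the combination of query efficiency (a surrogate guides each query) and parsimony (every candidate obeys a hard perturbation budget); the specific surrogate and acquisition here are generic placeholders.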

Author Information

Xingchen Wan (University of Oxford)
Henry Kenlay (University of Oxford)
Robin Ru (University of Oxford)
Arno Blaas (University of Oxford)
Michael A Osborne (University of Oxford)
Xiaowen Dong (University of Oxford)
