Workshop: New Frontiers in Graph Learning (GLFrontiers)

GNN Predictions on k-hop Egonets Boosts Adversarial Robustness

Jian Vora

Keywords: [ k-hop subgraphs ] [ Adversarial Robustness ]


Like many other deep learning models, Graph Neural Networks (GNNs) have been shown to be susceptible to adversarial attacks, i.e., the addition of crafted imperceptible noise to input data changes the model predictions drastically. We propose a very simple method, k-HOP-PURIFY, which makes node predictions on a k-hop egonet centered at the node instead of the entire graph, boosting adversarial accuracies. It can be used both i) as a post-processing step after applying popular defenses or ii) as a standalone defense method comparable to many other competitors. The method is extremely lightweight and scalable (it takes 4 lines of code to implement), unlike many other defense methods which are computationally expensive or rely on heuristics. We show performance gains through extensive experimentation across various types of attacks (poison/evasion, targeted/untargeted), perturbation rates, and defenses implemented in the DeepRobust library.
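The abstract itself contains no code, so the following is only a minimal sketch of the egonet-extraction step that the method's predictions would be restricted to. It uses a plain-Python BFS over an adjacency list rather than any specific GNN library; the function name `k_hop_egonet` and the adjacency-list representation are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def k_hop_egonet(adj, center, k):
    """Return the set of nodes within k hops of `center` in an undirected
    graph given as an adjacency list (dict: node -> list of neighbors).
    This is an illustrative sketch, not the paper's code."""
    seen = {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue  # do not expand beyond k hops
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

# Toy path graph 0-1-2-3-4: the 1-hop egonet of node 2 is its immediate
# neighborhood, while the 2-hop egonet already covers every node.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(k_hop_egonet(adj, 2, 1)))  # [1, 2, 3]
print(sorted(k_hop_egonet(adj, 2, 2)))  # [0, 1, 2, 3, 4]
```

In a real pipeline one would then run the trained GNN on the subgraph induced by this node set and read off the prediction for the center node only, which is presumably what keeps the defense to a few lines of code.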