
Workshop: New Frontiers in Graph Learning (GLFrontiers)

SATG: Structure Aware Transformers on Graphs for Node Classification

Sumedh B G · Sanjay Patnala · Himil Vasava · Akshay Sethi · Sonia Gupta

Keywords: [ Scalability ] [ Graph Transformers ] [ Node Classification ] [ Graph Data ] [ Transformers ]


Transformers have achieved state-of-the-art performance in Computer Vision (CV) and Natural Language Processing (NLP). Inspired by this, recent architectures have incorporated transformers into the domain of graph neural networks. Most existing Graph Transformers either take the full set of nodes as the input sequence, incurring quadratic time complexity, or restrict the input sequence to one-hop or k-hop neighbours, thereby ignoring long-range interactions entirely. To this end, we propose Structure Aware Transformer on Graphs (SATG), which captures both short-range and long-range interactions in a computationally efficient manner. When dealing with non-Euclidean spaces such as graphs, positional encoding becomes an integral component for providing structural knowledge to the transformer. Upon observing the shortcomings of existing positional encodings, we introduce a new class of positional encodings trained with a Neighbourhood Contrastive Loss that effectively captures the topology of the entire graph. We also introduce a method to capture long-range interactions without incurring quadratic time complexity. Extensive experiments on five benchmark datasets show that SATG consistently outperforms GNNs by a substantial margin and also outperforms other Graph Transformers.
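The abstract does not spell out the form of the Neighbourhood Contrastive Loss; a common InfoNCE-style formulation, in which a node's graph neighbours serve as positives and all other nodes as negatives, can be sketched as follows. The function name, the use of cosine similarity, and the temperature parameter `tau` are illustrative assumptions, not the paper's stated method.

```python
import numpy as np

def neighbourhood_contrastive_loss(Z, A, tau=0.5):
    """InfoNCE-style sketch of a neighbourhood contrastive loss (assumed form).

    Z: (N, d) array of node positional embeddings.
    A: (N, N) binary adjacency matrix; A[i, j] = 1 marks j as a positive for i.
    """
    # Normalise rows so the dot product below is cosine similarity.
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Z @ Z.T / tau                  # temperature-scaled similarity matrix
    np.fill_diagonal(S, -np.inf)       # exclude self-pairs from the partition
    log_partition = np.log(np.exp(S).sum(axis=1))  # per-node normaliser
    losses = []
    for i in range(len(Z)):
        for j in np.nonzero(A[i])[0]:
            # -log p(neighbour j | node i): pull neighbours together,
            # push non-neighbours apart.
            losses.append(-(S[i, j] - log_partition[i]))
    return float(np.mean(losses))
```

Under this formulation, embeddings in which adjacent nodes lie close together yield a lower loss than embeddings in which they are far apart, which is the behaviour a topology-aware positional encoding would be trained toward.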
