Exploiting All Laplacian Eigenvectors for Node Classification with Graph Transformers
Abstract
Graph transformers have emerged as powerful tools for modeling complex graph-structured data, offering the ability to capture long-range dependencies. Because self-attention is agnostic to graph topology, they require explicit positional encodings to inject structural information, most commonly derived from the eigenvectors of the graph Laplacian. However, existing approaches use only a small set of low-frequency eigenvectors, assuming that smooth global structure is sufficiently informative. We show that, for node classification, exploiting a much broader spectrum of eigenvectors yields significant gains, especially on heterophilic graphs. We further introduce a first-principles approach for ranking and selecting eigenvectors according to their importance for node classification. Our method is plug-and-play and delivers substantial improvements across diverse benchmarks, elevating even vanilla graph transformers to match or surpass state-of-the-art models.
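To make the abstract's core idea concrete, the sketch below shows Laplacian positional encodings that keep the full spectrum rather than truncating to the lowest-frequency eigenvectors. This is a minimal illustration, not the paper's implementation: the function name and the placeholder selection rule (spreading picks across the spectrum) are hypothetical, and the paper's actual importance-based ranking is described in later sections.

```python
import numpy as np

def laplacian_positional_encodings(adj: np.ndarray, k: int | None = None) -> np.ndarray:
    """Return eigenvectors of the symmetric normalized Laplacian as node positional encodings.

    adj : (n, n) symmetric adjacency matrix.
    k   : number of eigenvectors to keep; None keeps the entire spectrum.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5

    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues ascend from low to high frequency

    if k is None or k >= eigvecs.shape[1]:
        return eigvecs  # full spectrum: low- and high-frequency eigenvectors alike

    # Hypothetical selection rule: spread the k picks across the whole spectrum
    # instead of keeping only the k smallest-eigenvalue (smoothest) eigenvectors.
    idx = np.round(np.linspace(0, eigvecs.shape[1] - 1, k)).astype(int)
    return eigvecs[:, idx]

# Usage: concatenate the encodings to node features before the graph transformer.
# x_with_pe = np.concatenate([node_features, laplacian_positional_encodings(A)], axis=1)
```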