Transformers as Unrolled Inference in Probabilistic Laplacian Eigenmaps
Aditya Ravuri · Neil Lawrence
Abstract
We propose a probabilistic interpretation of transformers as unrolled inference steps assuming a probabilistic Laplacian Eigenmaps model from the ProbDR framework. Our derivation shows that, at initialisation, transformers perform "linear" dimensionality reduction. We also show that, within the transformer block, our arguments give rise to a graph Laplacian term rather than an attention matrix (which we interpret as an adjacency matrix). We demonstrate that simply subtracting the identity from the attention matrix (and thereby taking a graph diffusion step) improves validation performance on a language model and a simple vision transformer.
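To make the proposed modification concrete, below is a minimal sketch of a self-attention update in which the identity is subtracted from the attention matrix, so the output is computed as (A - I)V rather than AV. This is an illustrative, hedged reconstruction under assumed conventions (PyTorch, a single head, and the hypothetical class name `DiffusionAttention`), not the authors' exact implementation; because each row of the softmax attention matrix A sums to one, A - I equals the negative of the corresponding random-walk graph Laplacian, so this step can be read as one graph-diffusion update.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffusionAttention(nn.Module):
    """Single-head self-attention using (A - I) V instead of A V.

    Hypothetical sketch: A is the softmax attention matrix, interpreted as an
    adjacency matrix; subtracting the identity yields a graph-diffusion step
    (A - I = -(I - A), the negative random-walk Laplacian, since rows of A sum to 1).
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # A
        eye = torch.eye(x.shape[1], device=x.device)                    # I
        return (attn - eye) @ v                                         # (A - I) V


# Usage sketch
x = torch.randn(2, 16, 64)
out = DiffusionAttention(64)(x)
print(out.shape)  # torch.Size([2, 16, 64])
```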