SOFT: Softmax-free Transformer with Linear Complexity
Jiachen Lu · Jinghan Yao · Junge Zhang · Xiatian Zhu · Hang Xu · Weiguo Gao · Chunjing XU · Tao Xiang · Li Zhang


Vision transformers (ViTs) have pushed the state-of-the-art for various visual recognition tasks by patch-wise image tokenization followed by self-attention. However, the employment of self-attention modules results in quadratic complexity in both computation and memory usage. Various attempts at approximating self-attention with linear complexity have been made in natural language processing. However, an in-depth analysis in this work shows that they are either theoretically flawed or empirically ineffective for visual recognition. We further identify that their limitations are rooted in keeping the softmax self-attention during approximation. Specifically, conventional self-attention is computed by normalizing the scaled dot-product between token feature vectors. Keeping this softmax operation challenges any subsequent linearization effort. Based on this insight, a softmax-free transformer, or SOFT, is proposed for the first time. To remove the softmax in self-attention, a Gaussian kernel function is used to replace the dot-product similarity without further normalization. This enables the full self-attention matrix to be approximated via a low-rank matrix decomposition. The robustness of the approximation is achieved by calculating its Moore-Penrose inverse using a Newton-Raphson method. Extensive experiments on ImageNet show that SOFT significantly improves the computational efficiency of existing ViT variants. Crucially, with linear complexity, much longer token sequences are permitted in SOFT, resulting in a superior trade-off between accuracy and complexity.
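To make the abstract's pipeline concrete, here is a minimal NumPy sketch of the idea it describes: a Gaussian kernel replaces the softmax-normalized dot product, the attention matrix is approximated by a Nystrom-style low-rank decomposition over a small set of bottleneck tokens, and the Moore-Penrose inverse of the bottleneck block is obtained by a Newton-Raphson (Newton-Schulz) iteration. All names, the uniform landmark sampling, and the kernel bandwidth are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_attention(Q, V, m=16, iters=20):
    """Sketch of softmax-free attention with a Gaussian kernel and a
    Nystrom-style low-rank approximation (illustrative only).

    Q : (n, d) token features; SOFT ties queries and keys, so one matrix
    V : (n, d) value vectors
    m : number of bottleneck (landmark) tokens, m << n
    """
    n, d = Q.shape

    def kernel(X, Y):
        # Gaussian similarity in place of softmax(QK^T / sqrt(d))
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * np.sqrt(d)))

    # Bottleneck tokens: the paper forms them by average pooling;
    # uniform subsampling is used here for simplicity.
    idx = np.linspace(0, n - 1, m).astype(int)
    Qm = Q[idx]

    A = kernel(Q, Qm)    # (n, m) cross-similarity block
    M = kernel(Qm, Qm)   # (m, m) bottleneck block

    # Moore-Penrose inverse of M via Newton-Schulz iteration:
    #   P_{k+1} = P_k (2I - M P_k),
    # initialized at P_0 = M^T / (||M||_1 * ||M||_inf), a standard
    # choice that guarantees convergence.
    P = M.T / (np.abs(M).sum(axis=0).max() * np.abs(M).sum(axis=1).max())
    I = np.eye(m)
    for _ in range(iters):
        P = P @ (2 * I - M @ P)

    # Full attention S ~= A @ P @ A^T is never materialized: evaluating
    # right-to-left keeps cost and memory O(n * m), i.e. linear in n.
    return A @ (P @ (A.T @ V))
```

Evaluated right to left, the product A (P (Aᵀ V)) avoids ever forming the n × n attention matrix, which is where the linear complexity in sequence length comes from.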

Author Information

Jiachen Lu (Fudan University)
Jinghan Yao
Junge Zhang (Fudan University)
Xiatian Zhu (Samsung AI Centre, Cambridge)
Hang Xu (Huawei Noah's Ark Lab)
Weiguo Gao (Fudan University)
Chunjing XU (Huawei Technologies)
Tao Xiang (Samsung AI Centre, Cambridge)
Li Zhang (University of Oxford, Queen Mary University of London)
