

Oral Poster

MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map

Yuhong Chou · Man Yao · Kexin Wang · Yuqi Pan · Rui-Jie Zhu · Jibin Wu · Yiran Zhong · Yu Qiao · Bo Xu · Guoqi Li

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST
 
Oral presentation: Oral Session 6D
Fri 13 Dec 3:30 p.m. PST — 4:30 p.m. PST

Abstract:

Various linear complexity models, such as Linear Transformer (LinFormer), State Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace the conventional softmax attention in Transformer structures. However, the optimal design of these linear models remains an open question. In this work, we attempt to answer this question by finding the best linear approximation to softmax attention from a theoretical perspective. We start by unifying existing linear complexity models under a general linear attention form and then identify three conditions for the optimal linear attention design: (1) dynamic memory ability; (2) static approximation ability; (3) least parameter approximation. We find that none of the current linear models meets all three conditions, resulting in suboptimal performance. Instead, we propose Meta Linear Attention (MetaLA) as a solution that satisfies these conditions. Our experiments on the Multi-Query Associative Recall (MQAR) task, language modeling, image classification, and the Long-Range Arena (LRA) benchmark demonstrate that MetaLA is more effective than the existing linear models.
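As a rough sketch of the unified linear attention form the abstract refers to, the snippet below implements the standard gated linear-attention recurrence that LinFormer/SSM/LinRNN variants can be viewed as instances of: a fixed-size state is decayed and updated with each key-value pair, then read out with the query, giving linear time and constant memory in sequence length. This is an illustrative sketch, not MetaLA's exact parameterization; all names and shapes here are assumptions for demonstration.

```python
import numpy as np

def linear_attention(q, k, v, decay):
    """Generic gated linear-attention recurrence (illustrative sketch).

    q, k:  (T, d_k) query/key sequences
    v:     (T, d_v) value sequence
    decay: (T, d_k) per-step, per-channel forget gate in [0, 1]
    Returns the (T, d_v) outputs produced by the recurrent state.
    """
    T, d_k = q.shape
    d_v = v.shape[1]
    state = np.zeros((d_k, d_v))   # recurrent memory S_t, fixed size in T
    out = np.zeros((T, d_v))
    for t in range(T):
        # Decay (forget) part of the old memory, then write the new key-value pair.
        state = decay[t][:, None] * state + np.outer(k[t], v[t])
        # Read the memory with the current query.
        out[t] = q[t] @ state
    return out

# Toy usage with random inputs (shapes only, no trained parameters).
T, d_k, d_v = 8, 4, 4
rng = np.random.default_rng(0)
q = rng.normal(size=(T, d_k))
k = rng.normal(size=(T, d_k))
v = rng.normal(size=(T, d_v))
decay = rng.uniform(0.8, 1.0, size=(T, d_k))
print(linear_attention(q, k, v, decay).shape)  # (8, 4)
```

The paper's three conditions can be read against this template: how the decay term is produced determines dynamic memory ability, how well the readout can mimic a softmax attention map determines static approximation ability, and how many extra parameters the gating requires determines parameter efficiency.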
