

Poster

SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention

Róbert Csordás · Piotr Piękos · Kazuki Irie · Jürgen Schmidhuber

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Despite many recent works on Mixture of Experts (MoEs) for resource-efficient Transformer language models, existing methods mostly focus on MoEs for feedforward layers. Previous attempts at extending MoE to the self-attention layer fail to match the performance of the parameter-matched baseline. Our novel SwitchHead is an effective MoE method for the attention layer that reduces both the compute and memory requirements, achieving wall-clock speedup while matching the language modeling performance of the baseline Transformer. Our MoE mechanism allows SwitchHead to compute up to 8 times fewer attention matrices than the standard Transformer. SwitchHead can also be combined with MoE feedforward layers, resulting in fully-MoE "SwitchAll" Transformers. For our 262M parameter model trained on C4, SwitchHead matches the perplexity of standard models with only 44% of the compute and 27% of the memory. Zero-shot experiments on downstream tasks confirm the performance of SwitchHead, e.g., more than 4% absolute improvement on BLiMP compared to the baseline with equal compute resources.
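
To make the idea of expert-routed attention concrete, below is a minimal PyTorch sketch of a mixture-of-experts attention layer in the spirit described above: queries and keys are computed with a small number of shared heads, while value and output projections are mixed from a set of experts chosen by per-token routers. The module name, the sigmoid routing, and all sizes are illustrative assumptions, not the authors' implementation; a real implementation would dispatch only the selected experts sparsely rather than computing a dense mixture.

```python
import torch
import torch.nn as nn


class MoEAttentionSketch(nn.Module):
    """Illustrative MoE attention: few attention heads, expert-routed V/O projections."""

    def __init__(self, d_model=512, n_heads=2, d_head=64, n_experts=4):
        super().__init__()
        self.n_heads, self.d_head, self.n_experts = n_heads, d_head, n_experts
        # Shared (non-expert) query/key projections for a small number of heads.
        self.q = nn.Linear(d_model, n_heads * d_head, bias=False)
        self.k = nn.Linear(d_model, n_heads * d_head, bias=False)
        # Per-head expert banks for value and output projections (assumption).
        self.v = nn.Parameter(0.02 * torch.randn(n_heads, n_experts, d_model, d_head))
        self.o = nn.Parameter(0.02 * torch.randn(n_heads, n_experts, d_head, d_model))
        # Routers producing per-token, per-head expert gates (assumption).
        self.router_v = nn.Linear(d_model, n_heads * n_experts, bias=False)
        self.router_o = nn.Linear(d_model, n_heads * n_experts, bias=False)

    def forward(self, x):
        B, T, _ = x.shape
        H, E, Dh = self.n_heads, self.n_experts, self.d_head
        q = self.q(x).view(B, T, H, Dh).transpose(1, 2)   # (B, H, T, Dh)
        k = self.k(x).view(B, T, H, Dh).transpose(1, 2)   # (B, H, T, Dh)
        # Non-competitive sigmoid gates; dense mixture shown for clarity.
        gate_v = torch.sigmoid(self.router_v(x)).view(B, T, H, E)
        gate_o = torch.sigmoid(self.router_o(x)).view(B, T, H, E)
        # Values as a gate-weighted mixture of expert projections.
        v = torch.einsum('bthe,btd,hedk->bthk', gate_v, x, self.v).transpose(1, 2)
        # Only n_heads attention matrices are computed, far fewer than a
        # parameter-matched dense multi-head attention would use.
        att = torch.softmax(q @ k.transpose(-2, -1) / Dh ** 0.5, dim=-1)
        ctx = (att @ v).transpose(1, 2)                    # (B, T, H, Dh)
        # Output projection, again mixed over experts per token and head.
        return torch.einsum('bthe,bthk,hekd->btd', gate_o, ctx, self.o)


# Usage example with assumed shapes.
layer = MoEAttentionSketch()
y = layer(torch.randn(2, 16, 512))   # -> (2, 16, 512)
```

The compute saving in this sketch comes from keeping the number of attention matrices (one per head) small while recovering capacity through the expert banks on the value and output sides; the exact split between shared and expert-routed projections in SwitchHead should be taken from the paper itself.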
