
Squeezeformer: An Efficient Transformer for Automatic Speech Recognition
Sehoon Kim · Amir Gholami · Albert Shaw · Nicholas Lee · Karttikeya Mangalam · Jitendra Malik · Michael Mahoney · Kurt Keutzer

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #620

The recently proposed Conformer model has become the de facto backbone for various downstream speech tasks, thanks to its hybrid attention-convolution architecture that captures both local and global features. However, through a series of systematic studies, we find that the Conformer architecture's design choices are not optimal. After re-examining the design choices for both the macro- and micro-architecture of Conformer, we propose Squeezeformer, which consistently outperforms state-of-the-art ASR models under the same training schemes. In particular, for the macro-architecture, Squeezeformer incorporates (i) the Temporal U-Net structure, which reduces the cost of the multi-head attention modules on long sequences, and (ii) a simpler block structure of a multi-head attention or convolution module followed by a feed-forward module, instead of the Macaron structure proposed in Conformer. Furthermore, for the micro-architecture, Squeezeformer (i) simplifies the activations in the convolutional block, (ii) removes redundant Layer Normalization operations, and (iii) incorporates an efficient depthwise down-sampling layer to sub-sample the input signal. Squeezeformer achieves state-of-the-art results of 7.5%, 6.5%, and 6.0% word-error-rate (WER) on LibriSpeech test-other without external language models, which are 3.1%, 1.4%, and 0.6% better than Conformer-CTC with the same number of FLOPs. Our code is open-sourced and available online.
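The key efficiency idea behind both the Temporal U-Net and the down-sampling layer is that a strided depthwise convolution shortens the time axis cheaply (one filter per channel, no cross-channel mixing), so the quadratic-cost attention modules operate on a shorter sequence. As a rough illustration only, not the authors' implementation, here is a minimal NumPy sketch of a strided depthwise 1-D convolution; the function name, shapes, and kernel size are illustrative assumptions:

```python
import numpy as np

def depthwise_downsample(x, kernel, stride=2):
    """Strided depthwise 1-D convolution (illustrative sketch).

    x:      (T, D) feature sequence (time steps x channels)
    kernel: (K, D) one filter per channel -- depthwise means no
            cross-channel mixing, so the cost is O(T*K*D) rather
            than O(T*K*D*D) for a dense convolution.
    Returns a (T_out, D) sequence with T_out = (T - K) // stride + 1,
    roughly halving the time axis when stride=2.
    """
    T, D = x.shape
    K, _ = kernel.shape
    T_out = (T - K) // stride + 1
    out = np.empty((T_out, D))
    for t in range(T_out):
        window = x[t * stride : t * stride + K]   # (K, D) time window
        out[t] = (window * kernel).sum(axis=0)    # per-channel dot product
    return out

# Halve a 100-frame, 8-channel sequence with a kernel of width 3.
x = np.random.randn(100, 8)
k = np.random.randn(3, 8)
y = depthwise_downsample(x, k)
print(y.shape)  # (49, 8)
```

Since self-attention cost grows quadratically with sequence length, halving the time axis cuts the attention FLOPs by roughly a factor of four, which is what makes this cheap depthwise sub-sampling pay off on long utterances.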

Author Information

Sehoon Kim (University of California Berkeley)
Amir Gholami (University of California, Berkeley)
Albert Shaw (Google)
Nicholas Lee (University of California, Berkeley)
Karttikeya Mangalam (UC Berkeley (BAIR))

I’m a first-year PhD student in Computer Science at the Department of Electrical Engineering & Computer Sciences (EECS) at the University of California, Berkeley, where I’m jointly advised by Prof. Jitendra Malik and Prof. Yi Ma.

Jitendra Malik (University of California at Berkeley)
Michael Mahoney (UC Berkeley)
Kurt Keutzer (EECS, UC Berkeley)
