Poster
Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing
Zihang Dai · Guokun Lai · Yiming Yang · Quoc V Le

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #64

With the success of language pretraining, it is highly desirable to develop more efficient architectures with good scalability that can exploit the abundant unlabeled data at a lower cost. To improve efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level representation, especially for tasks that only require a single-vector representation of the sequence. With this intuition, we propose Funnel-Transformer, which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the FLOPs saved from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension.
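Below is a minimal conceptual sketch in PyTorch of the encode-pool-decode idea the abstract describes: encoder blocks interleaved with sequence-length pooling, plus a shallow decoder that restores token-level resolution. It is an illustration, not the authors' implementation; the real Funnel-Transformer pools only the attention queries while keeping full-length keys and values, uses relative attention, and tunes block depths, so the module names, block sizes, and pooling choices here are assumptions for illustration only.

    # Conceptual sketch only; not the paper's architecture or code.
    import torch
    import torch.nn.functional as F
    from torch import nn


    def block(d_model, n_layers):
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        return nn.TransformerEncoder(layer, num_layers=n_layers)


    class FunnelSketch(nn.Module):
        def __init__(self, d_model=256, layers_per_block=2):
            super().__init__()
            # Three encoder blocks; the sequence length is halved between blocks.
            self.blocks = nn.ModuleList(block(d_model, layers_per_block) for _ in range(3))
            # A shallow decoder restores token-level resolution for MLM-style pretraining.
            self.decoder = block(d_model, layers_per_block)

        def forward(self, h):                      # h: (batch, seq_len, d_model)
            full_res = None
            for i, blk in enumerate(self.blocks):
                h = blk(h)
                if i == 0:
                    full_res = h                   # keep full-length states for the decoder
                if i < len(self.blocks) - 1:
                    # Strided mean pooling halves the sequence length,
                    # cutting the FLOPs of the deeper blocks.
                    h = F.avg_pool1d(h.transpose(1, 2), kernel_size=2, stride=2).transpose(1, 2)
            pooled = h                             # compressed sequence for sequence-level tasks
            # Up-sample back to full length, combine with the first-block states,
            # and refine with the decoder to get token-level outputs.
            up = F.interpolate(pooled.transpose(1, 2), size=full_res.size(1),
                               mode="nearest").transpose(1, 2)
            token_level = self.decoder(up + full_res)
            return pooled, token_level


    x = torch.randn(2, 128, 256)                   # (batch, seq_len, hidden)
    pooled, token_level = FunnelSketch()(x)
    print(pooled.shape, token_level.shape)         # (2, 32, 256) and (2, 128, 256)

In this sketch, sequence-level tasks would read off the compressed states (e.g., the first pooled vector), while token-level pretraining objectives use the decoder output at full length.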

Author Information

Zihang Dai (Google Brain)
Guokun Lai (Carnegie Mellon University)
Yiming Yang (Carnegie Mellon University)
Quoc V Le (Google)
