Poster
Limits to Depth Efficiencies of Self-Attention
Yoav Levine · Noam Wies · Or Sharir · Hofit Bata · Amnon Shashua

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #1106

Self-attention architectures, which are rapidly pushing the frontier in natural language processing, demonstrate a surprising depth-inefficient behavior: previous works indicate that increasing the internal representation (network width) is just as useful as increasing the number of self-attention layers (network depth). We theoretically predict a width-dependent transition between depth-efficiency and depth-inefficiency in self-attention. We conduct systematic empirical ablations on networks of depths 6 to 48 that clearly reveal the theoretically predicted behaviors, and provide explicit quantitative suggestions regarding the optimal depth-to-width allocation for a given self-attention network size. The race toward language models beyond 1 trillion parameters makes informed guidelines for increasing self-attention depth and width in tandem an essential ingredient. Our guidelines elucidate the depth-to-width trade-off in self-attention networks of sizes up to the scale of GPT-3 (which is too deep for its size) and beyond, marking an unprecedented width of 30K as optimal for a 1-trillion-parameter self-attention network.
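
As a rough illustration of the depth-to-width trade-off discussed in the abstract (and not the paper's fitted allocation rule), the Python sketch below uses the common approximation of about 12·d² parameters per transformer layer of width d (4·d² for the attention projections plus 8·d² for a 4x feed-forward block, ignoring embeddings, biases, and layer norms) to show which depths a fixed parameter budget implies at various widths. The function names and the 12·d² per-layer count are assumptions introduced here for illustration only.

    # Illustrative sketch, not the paper's result: standard transformer parameter
    # arithmetic relating a total parameter budget to depth L and width d.

    def approx_params(depth: int, width: int) -> int:
        # ~12 * d^2 parameters per layer: 4*d^2 attention + 8*d^2 feed-forward,
        # ignoring embeddings, biases, and layer norms (an assumption).
        return 12 * depth * width ** 2

    def implied_depth(total_params: float, width: int) -> float:
        # Depth that a given total budget supports at a given width.
        return total_params / (12 * width ** 2)

    if __name__ == "__main__":
        budget = 1e12  # the 1-trillion-parameter regime mentioned in the abstract
        for width in (10_000, 20_000, 30_000, 40_000):
            print(f"width={width:>6}: implied depth ~ {implied_depth(budget, width):5.1f}")

Under this crude count, the abstract's suggested width of 30K would put a 1-trillion-parameter network at roughly 90-95 layers; the paper's actual guidelines, not this approximation, should be consulted for the recommended allocation.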

Author Information

Yoav Levine (Hebrew University of Jerusalem)
Noam Wies (Hebrew University of Jerusalem)
Or Sharir (Hebrew University of Jerusalem)
Hofit Bata (Hebrew University of Jerusalem)
Amnon Shashua (Hebrew University of Jerusalem)
