Self-attention architectures, which are rapidly pushing the frontier in natural language processing, demonstrate a surprising depth-inefficient behavior: previous works indicate that increasing the internal representation dimension (network width) is just as useful as increasing the number of self-attention layers (network depth). We theoretically predict a width-dependent transition between depth-efficiency and depth-inefficiency in self-attention. We conduct systematic empirical ablations on networks of depths 6 to 48 that clearly reveal the theoretically predicted behaviors, and provide explicit quantitative suggestions regarding the optimal depth-to-width allocation for a given self-attention network size. The race toward language models beyond the 1-Trillion-parameter mark makes informed guidelines for increasing self-attention depth and width in tandem essential. Our guidelines elucidate the depth-to-width trade-off in self-attention networks of sizes up to the scale of GPT-3 (which is too deep for its size) and beyond, marking an unprecedented width of 30K as optimal for a 1-Trillion-parameter self-attention network.
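To make the scale of these figures concrete, the following minimal Python sketch runs the parameter-count arithmetic that such depth-to-width figures rest on, using the standard ~12·depth·width² approximation for a Transformer's non-embedding parameters. The 12·L·d² count and the GPT-3 configuration (96 layers, width 12288) are well-known reference points; the function names and the budget-to-depth conversion below are illustrative assumptions, not the paper's actual derivation of the optimal allocation.

```python
# Rough parameter-count arithmetic behind the depth-to-width figures above.
# Assumes the standard ~12 * depth * width^2 non-embedding parameter count
# for a stack of self-attention blocks (4*d^2 attention + 8*d^2 feed-forward).
# Illustrative sketch only; it does not reproduce the paper's optimality analysis.

def param_count(depth: int, width: int) -> float:
    """Approximate non-embedding parameter count of a self-attention stack."""
    return 12 * depth * width ** 2

def depth_for_budget(total_params: float, width: int) -> int:
    """Depth implied by a parameter budget at a fixed width (rounded)."""
    return round(total_params / (12 * width ** 2))

# GPT-3-like configuration: depth 96, width 12288 -> roughly 1.7e11 parameters.
print(f"GPT-3-like stack: {param_count(96, 12288):.2e} params")

# A 1-Trillion-parameter budget at the suggested width of ~30K corresponds to
# a stack of roughly 90-95 layers under this approximation.
print(f"1T budget at width 30K -> depth ~{depth_for_budget(1e12, 30_000)}")
```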
Author Information
Yoav Levine (Hebrew University of Jerusalem)
Noam Wies (Hebrew University of Jerusalem)
Or Sharir (Hebrew University of Jerusalem)
Hofit Bata (Hebrew University of Jerusalem)
Amnon Shashua (Hebrew University of Jerusalem)
More from the Same Authors
- 2021: Neural Tensor Contractions and the Expressive Power of Deep Neural Quantum States (Or Sharir · Amnon Shashua · Giuseppe Carleo)
- 2022: Towards Neural Variational Monte Carlo That Scales Linearly with System Size (Or Sharir · Garnet Chan · Anima Anandkumar)
- 2023 Poster: The Learnability of In-Context Learning (Noam Wies · Yoav Levine · Amnon Shashua)
- 2016 Poster: Learning a Metric Embedding for Face Recognition using the Multibatch Method (Oren Tadmor · Tal Rosenwein · Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua)
- 2011 Poster: ShareBoost: Efficient multiclass learning with feature sharing (Shai Shalev-Shwartz · Yonatan Wexler · Amnon Shashua)
- 2006 Poster: Nonnegative Sparse PCA (Ron Zass · Amnon Shashua)
- 2006 Poster: Doubly Stochastic Normalization for Spectral Clustering (Ron Zass · Amnon Shashua)