

Poster

Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers

Junhan Kim · Chungman Lee · Eulrang Cho · Kyungphil Park · Ho-young Kim · Joonyoung Kim · Yongkweon Jeon

East Exhibit Hall A-C #2409
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

With the increasing complexity of generative AI models, post-training quantization (PTQ) has emerged as a promising solution for deploying hyper-scale models on edge devices such as mobile devices and TVs. Existing PTQ schemes, however, consume considerable time and resources, which could be a bottleneck in real situations where frequent model updates and multiple hyperparameter tunings are required. As a cost-effective alternative, learning-free PTQ schemes have been proposed. Still, the performance is somewhat limited because they cannot consider inter-layer dependency within the attention module, a significant feature of Transformers. In this paper, we thus propose a novel PTQ algorithm that balances accuracy and efficiency. The key idea of the proposed algorithm, called aespa, is to perform quantization layer-wise for efficiency while considering cross-layer dependency to preserve the attention score. Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models.
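To give a rough sense of the idea described in the abstract, the sketch below is an illustrative toy example, not the authors' aespa implementation: it quantizes each attention projection weight one layer at a time (round-to-nearest, per-tensor), but evaluates the resulting error on the attention output rather than on the individual layer, which is one simple way to reflect cross-layer dependency within the attention module. All function names, bit-widths, and tensor shapes here are hypothetical stand-ins.

```python
# Illustrative sketch only (not the paper's code): layer-wise weight
# quantization scored against the attention output, approximating the
# "preserve the attention score" objective described in the abstract.
import torch

def quantize_weight(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Uniform symmetric per-tensor round-to-nearest quantization."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

def attention_output(x, w_q, w_k, w_v):
    """Single-head attention output for a batch of calibration tokens x."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return scores @ v

# Random stand-ins for calibration data and full-precision projections.
torch.manual_seed(0)
x = torch.randn(16, 64)                      # 16 tokens, hidden size 64
w_q, w_k, w_v = (torch.randn(64, 64) * 0.05 for _ in range(3))

ref = attention_output(x, w_q, w_k, w_v)     # full-precision reference

# Quantize one projection at a time (layer-wise for efficiency), but measure
# the error on the attention output so cross-layer interaction is captured.
for name in ("w_q", "w_k", "w_v"):
    weights = {"w_q": w_q, "w_k": w_k, "w_v": w_v}
    weights[name] = quantize_weight(weights[name])
    err = torch.norm(ref - attention_output(x, **weights)) / torch.norm(ref)
    print(f"quantizing {name} alone -> relative attention-output error {err:.4f}")
```

The contrast this toy highlights is between a purely local objective (minimizing each layer's own weight reconstruction error) and an attention-aware one (minimizing error on the attention output); the abstract's claim is that the latter can be pursued while still quantizing layer by layer.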
