

Oral Poster

DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs

Haokun Lin · Haobo Xu · Yichen WU · Jingzhi Cui · Yingtao Zhang · Linzhan Mou · Linqi Song · Zhenan Sun · Ying Wei

East Exhibit Hall A-C #1911
Thu 12 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 3D: Natural Language Processing
Thu 12 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

Quantization of large language models (LLMs) faces significant challenges, particularly due to outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, activations that have relatively large magnitudes across all tokens. However, these methods struggle to smooth Massive Outliers, which exhibit far larger values and cause severe performance degradation under low-bit quantization. In this paper, we introduce DuQuant, a novel approach that uses rotation and permutation transformations to mitigate both massive and normal outliers more effectively. First, DuQuant constructs rotation matrices, using specific outlier dimensions as prior knowledge, to redistribute outliers to adjacent channels via block-wise rotation. Second, we employ a zigzag permutation to balance the distribution of outliers across blocks, thereby reducing block-wise variance. A subsequent rotation further smooths the activation landscape, enhancing model performance. DuQuant simplifies the quantization process and excels at managing outliers, outperforming state-of-the-art baselines across various sizes and types of LLMs on multiple tasks, even with 4-bit weight-activation quantization.
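The abstract outlines a three-step pipeline: a block-wise rotation that spreads each block's outlier channel over its neighbors, a zigzag permutation that balances outlier magnitudes across blocks, and a second rotation for further smoothing. The following is a minimal PyTorch sketch of that pipeline, not the authors' released code: the block size, the Householder-based rotation construction, and all helper names are illustrative assumptions, and the paper additionally composes random rotations and folds the inverse transforms into the weights so the layer output is preserved.

    # Minimal sketch (illustrative, not DuQuant's implementation):
    # block-wise rotation -> zigzag permutation -> second rotation.
    import torch

    def spread_rotation(block: int, k: int) -> torch.Tensor:
        # Orthogonal Householder matrix whose k-th row is the uniform direction
        # 1/sqrt(block), so the outlier channel k is spread evenly over its block.
        u = torch.full((block,), block ** -0.5)
        e = torch.zeros(block)
        e[k] = 1.0
        v = e - u
        return torch.eye(block) - 2.0 * torch.outer(v, v) / (v @ v)

    def block_rotation(x: torch.Tensor, block: int = 16) -> torch.Tensor:
        # Rotate activations block by block, using the largest-magnitude channel
        # in each block (the outlier prior) to choose the rotation.
        d = x.shape[-1]
        assert d % block == 0
        out = torch.empty_like(x)
        for s in range(0, d, block):
            xb = x[..., s:s + block]
            k = xb.reshape(-1, block).abs().amax(dim=0).argmax().item()
            out[..., s:s + block] = xb @ spread_rotation(block, k)
        return out

    def zigzag_permutation(col_absmax: torch.Tensor, block: int = 16) -> torch.Tensor:
        # Deal channels to blocks in a back-and-forth (zigzag) order of descending
        # outlier magnitude, so every block gets a similar mix of large and small channels.
        order = torch.argsort(col_absmax, descending=True)
        n_blocks = col_absmax.numel() // block
        buckets = [[] for _ in range(n_blocks)]
        for i, ch in enumerate(order.tolist()):
            rnd, pos = divmod(i, n_blocks)
            b = pos if rnd % 2 == 0 else n_blocks - 1 - pos  # snake across blocks
            buckets[b].append(ch)
        return torch.tensor([c for b in buckets for c in b])

    # Toy usage: inject a massive outlier channel and run the three steps.
    x = torch.randn(8, 64)
    x[:, 5] *= 50.0
    x1 = block_rotation(x)                            # first block-wise rotation
    perm = zigzag_permutation(x1.abs().amax(dim=0))   # balance outliers across blocks
    x2 = block_rotation(x1[:, perm])                  # subsequent smoothing rotation

Because every transformation here is orthogonal or a permutation, the same (inverse) transforms can in principle be absorbed into the adjacent weight matrices, which is why such reparameterizations can smooth activations without changing the model's function.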
