

Poster

QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs

Saleh Ashkboos · Amirkeivan Mohtashami · Maximilian Croci · Bo Li · Pashmina Cameron · Martin Jaggi · Dan Alistarh · Torsten Hoefler · James Hensman

East Exhibit Hall A-C #2111
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We introduce QuaRot, a new Quantization scheme based on Rotations, which quantizes LLMs end-to-end, including all weights, activations, and the KV cache, in 4 bits. QuaRot rotates LLMs in a way that removes outliers from the hidden state without changing the output, making quantization easier. This computational invariance is applied to the hidden state (residual) of the LLM, as well as to the activations of the feed-forward components, aspects of the attention mechanism, and the KV cache. The result is a quantized model in which all matrix multiplications are performed in 4 bits, without any channels retained in higher precision. Our 4-bit quantized LLAMA2-70B model loses at most 0.47 WikiText-2 perplexity and retains 99% of the zero-shot performance. We also show that QuaRot can provide lossless 6- and 8-bit LLAMA-2 models, without any calibration data, using round-to-nearest quantization. Code is available at github.com/spcl/QuaRot.
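The core idea can be illustrated with a minimal sketch of the computational-invariance argument: multiplying a layer's weights by an orthogonal matrix Q and the incoming activations by Q^T leaves the layer output unchanged, while spreading outlier channels across the whole vector so a low-bit quantizer has a smaller dynamic range to cover. This sketch uses a generic random orthogonal matrix and a toy linear layer purely for illustration; the paper itself uses randomized Hadamard transforms fused into the surrounding weights, not this code.

```python
# Illustration only: computational invariance under an orthogonal rotation.
# QuaRot uses randomized Hadamard transforms; here a random orthogonal Q
# stands in to keep the example self-contained.
import torch

torch.manual_seed(0)
d = 64

# Toy activation vector with one strong outlier channel.
x = torch.randn(d)
x[3] += 50.0

# Toy weight matrix of a linear layer: y = W @ x
W = torch.randn(d, d) / d**0.5

# Random orthogonal rotation Q (from a QR decomposition of a Gaussian matrix).
Q, _ = torch.linalg.qr(torch.randn(d, d))

# Rotate weights and activations: W' = W Q, x' = Q^T x
W_rot = W @ Q
x_rot = Q.T @ x

# The layer output is unchanged, since W' x' = W Q Q^T x = W x.
print(torch.allclose(W @ x, W_rot @ x_rot, atol=1e-4))   # True

# The outlier is spread across channels, shrinking the ratio of the
# largest magnitude to the mean magnitude that quantization must cover.
print((x.abs().max() / x.abs().mean()).item())            # large (outlier)
print((x_rot.abs().max() / x_rot.abs().mean()).item())    # much smaller
```

In the actual method, the rotation and its inverse are absorbed into adjacent weight matrices and normalization layers, so no extra multiplications are needed at inference time.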
