

Poster

Intriguing Properties of Quantization at Scale

Arash Ahmadian · Saurabh Dash · Hongyu Chen · Bharat Venkitesh · Zhen Stephen Gou · Phil Blunsom · Ahmet Üstün · Sara Hooker

Great Hall & Hall B1+B2 (level 1) #523

Abstract:

Emergent properties have been widely adopted as a term to describe behavior not present in smaller models but observed in larger models (Wei et al., 2022a). Recent work suggests that the trade-off incurred by quantization is also an emergent property, with sharp drops in performance in models over 6B parameters. In this work, we ask: are quantization cliffs in performance solely a factor of scale? Against a backdrop of increased research focus on why certain emergent properties surface at scale, this work provides a useful counter-example. We posit that it is possible to optimize for a quantization-friendly training recipe that suppresses large activation magnitude outliers. Here, we find that outlier dimensions are not an inherent product of scale, but rather sensitive to the optimization conditions present during pre-training. This both opens up directions for more efficient quantization and poses the question of whether other emergent properties are inherent or can be altered and conditioned by optimization and architecture design choices. We successfully quantize models ranging in size from 410M to 52B with minimal degradation in performance.
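The abstract attributes quantization cliffs to large activation-magnitude outliers. Below is a minimal NumPy sketch, not code from the paper, illustrating the underlying mechanism: under symmetric per-tensor absmax int8 quantization, a single outlier feature dimension inflates the quantization scale, so the remaining values are rounded much more coarsely. The outlier magnitude (60x) and tensor shape are arbitrary choices for illustration.

```python
# Minimal sketch (assumptions labeled above): one outlier dimension widens
# the absmax range, degrading simple per-tensor int8 quantization.
import numpy as np

rng = np.random.default_rng(0)

def absmax_int8_quantize(x: np.ndarray):
    """Symmetric per-tensor int8 quantization with round-to-nearest."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# "Well-behaved" activations: roughly standard normal.
acts = rng.normal(0.0, 1.0, size=(4, 1024)).astype(np.float32)

# Inject one outlier feature dimension with ~60x larger magnitude,
# mimicking the emergent outlier dimensions the abstract refers to.
acts_outlier = acts.copy()
acts_outlier[:, 0] *= 60.0

for name, a in [("no outliers", acts), ("with outlier dim", acts_outlier)]:
    q, s = absmax_int8_quantize(a)
    err = np.abs(dequantize(q, s) - a).mean()
    print(f"{name:16s}  scale={s:.4f}  mean abs error={err:.4f}")
```

Running this shows the mean reconstruction error growing roughly in proportion to the inflated scale, which is the kind of degradation a training recipe that suppresses such outliers would avoid.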
