Poster
Accuracy is Not All You Need
Abhinav Dutta · Sanjeev Krishnan · Nipun Kwatra · Ramachandran Ramjee
East Exhibit Hall A-C #2902
When Large Language Models (LLMs) are compressed using techniques such as quantization, the predominant way to validate such techniques is by measuring the model's accuracy on various benchmarks. If the accuracies of the baseline and compressed models are close, it is assumed that quality degradation is negligible. However, even when the two accuracies are similar, we observe the phenomenon of flips, wherein answers change from correct to incorrect and vice versa in roughly equal proportion, leaving aggregate accuracy largely unchanged. We conduct a detailed study of metrics across multiple compression techniques, models, and datasets, demonstrating that the behavior of compressed models as visible to end users is often significantly different from that of the baseline model, even when accuracy is similar. We further evaluate compressed models qualitatively and quantitatively using MT-Bench and show that compressed models are significantly worse than baseline models on generative tasks. Thus, in addition to accuracy, we argue that compression techniques should also be evaluated using distance metrics. We propose two such metrics, KL-Divergence and flips, and show that they are well correlated.
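To make the two proposed metrics concrete, here is a minimal sketch of how a flip rate and a mean next-token KL-Divergence between a baseline and a compressed model might be computed. The function names, array shapes, and the toy data are illustrative assumptions, not the authors' reference implementation; the abstract only specifies the metrics, not this code.

```python
import numpy as np

def flip_rate(baseline_correct, compressed_correct):
    # Fraction of benchmark examples whose correctness changes between
    # the baseline and compressed models (correct -> incorrect or vice versa).
    baseline_correct = np.asarray(baseline_correct, dtype=bool)
    compressed_correct = np.asarray(compressed_correct, dtype=bool)
    return np.mean(baseline_correct != compressed_correct)

def mean_kl_divergence(baseline_logits, compressed_logits):
    # Mean KL(P_baseline || P_compressed) over next-token distributions.
    # Both logit arrays are assumed to have shape (num_tokens, vocab_size).
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    log_p = log_softmax(np.asarray(baseline_logits, dtype=np.float64))
    log_q = log_softmax(np.asarray(compressed_logits, dtype=np.float64))
    kl = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)
    return kl.mean()

# Toy usage: both models score 3/5 (identical accuracy),
# yet 40% of the answers flip between them.
baseline   = [1, 1, 0, 0, 1]
compressed = [1, 0, 0, 1, 1]
print(flip_rate(baseline, compressed))  # 0.4
```

The toy example illustrates the paper's central observation: matched aggregate accuracy can hide a large per-example disagreement that only a distance metric such as flips or KL-Divergence exposes.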