SmoothRot: Combining Channel-Wise Scaling and Rotation for Quantization-Friendly LLMs
Patrik Czakó, Gábor Kertész, Sándor Szénási
arXiv.org Artificial Intelligence
We present SmoothRot, a novel post-training quantization technique that enhances the efficiency of 4-bit quantization in Large Language Models (LLMs). SmoothRot addresses the critical challenge of massive activation outliers by integrating channel-wise scaling with Hadamard transformations. Our technique effectively transforms extreme outliers into quantization-friendly activations, significantly improving quantization accuracy. Experiments conducted on popular LLMs (LLaMA2 7B, LLaMA3.1 8B, and Mistral 7B) demonstrate that SmoothRot consistently reduces the performance gap between quantized and FP16 models by approximately 10-30% across language generation and zero-shot reasoning tasks, without introducing additional inference latency. Large Language Models (LLMs) [1]-[3] have shown remarkable capabilities in natural language processing, becoming central to many artificial intelligence applications. However, the rapid increase in model sizes required to achieve these impressive results has significantly raised their training and inference costs in terms of time, memory, and energy consumption compared to smaller models [4].
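The core idea of combining channel-wise scaling with a Hadamard rotation can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the scale `s`, the toy activation matrix, and the per-tensor symmetric quantizer are all illustrative. Channel-wise scaling (in the style of SmoothQuant) migrates activation outliers into the weights, and an orthonormal Hadamard rotation then spreads the remaining outlier energy across channels, so both operands become easier to quantize at 4 bits while the full-precision product is unchanged.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an orthonormal n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # H @ H.T = I, so (X H)(H^T W) == X W in full precision

def quantize_sym(x, bits=4):
    """Toy symmetric per-tensor quantization to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(16, d))
X[:, 3] *= 50.0                       # one massive outlier channel, as in LLM activations
W = rng.normal(size=(d, d))

# Step 1: channel-wise scaling, X' = X / s, W' = diag(s) W (s is an illustrative choice)
s = np.abs(X).max(axis=0) ** 0.5
Xs, Ws = X / s, W * s[:, None]

# Step 2: Hadamard rotation, X'' = X' H, W'' = H^T W' (exact in FP16/FP32 up to rounding)
H = hadamard(d)
Xr, Wr = Xs @ H, H.T @ Ws

ref = X @ W
err_naive = np.abs(quantize_sym(X) @ quantize_sym(W) - ref).mean()
err_smoothrot = np.abs(quantize_sym(Xr) @ quantize_sym(Wr) - ref).mean()
```

With the 4-bit per-tensor quantizer above, the outlier channel forces a coarse step size on the naive path, while the scaled-and-rotated operands quantize with visibly lower error; since the rotation is orthonormal, it adds no inference cost when folded into the weights.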
Jul-30-2025