When Compression Meets Model Compression: Memory-Efficient Double Compression for Large Language Models
Weilan Wang, Yu Mao, Dongdong Tang, Hongchao Du, Nan Guan, Chun Jason Xue
Large language models (LLMs) exhibit excellent performance on a wide range of tasks. However, their memory requirements pose a major challenge when deploying on memory-limited devices, even for quantized LLMs. This paper introduces a framework that further compresses LLMs after quantization, achieving about a 2.2x compression ratio. A compression-aware quantization method is first proposed to enhance the compressibility of model weights by re-scaling the model parameters before quantization, followed by a pruning method to improve compressibility further. Building on this, we observe that decompression can become a bottleneck in practical deployment scenarios. We therefore provide a detailed analysis of the trade-off between memory usage and latency introduced by the proposed method, and propose a speed-adaptive method to overcome it. Experimental results show that inference with the compressed model achieves a 40% reduction in memory size with negligible loss in accuracy and inference speed.
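The core idea of "double compression" is to apply a lossless compressor on top of already-quantized weights. The sketch below is a minimal illustration of that pipeline, not the authors' exact method: it quantizes a toy weight matrix to 4-bit integers with a simple per-channel scale (standing in for the paper's compression-aware re-scaling), packs the nibbles, and measures how much extra ratio a general-purpose lossless compressor (zlib here, as an assumed stand-in) can squeeze out.

```python
# Minimal sketch of quantize-then-losslessly-compress ("double compression").
# The per-channel scale below is a simplified stand-in for the paper's
# compression-aware quantization; the real method re-scales parameters to
# make the quantized integer distribution more compressible.

import zlib
import numpy as np


def quantize_4bit(w: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Symmetric 4-bit quantization with a per-output-channel scale."""
    return np.clip(np.round(w / scale), -8, 7).astype(np.int8)


def pack_nibbles(q: np.ndarray) -> bytes:
    """Pack signed 4-bit values two per byte."""
    u = (q.flatten() + 8).astype(np.uint8)   # shift to the range 0..15
    if u.size % 2:                           # pad to an even count
        u = np.append(u, np.uint8(0))
    return ((u[0::2] << 4) | u[1::2]).tobytes()


# Toy weight matrix standing in for one LLM layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

# Per-channel max-abs scale; a compression-aware scheme would instead choose
# scales that also lower the entropy of the quantized values.
scale = np.abs(w).max(axis=1, keepdims=True) / 7.0

q = quantize_4bit(w, scale)
raw = pack_nibbles(q)                        # size after 4-bit quantization
compressed = zlib.compress(raw, level=6)     # lossless second stage

print(f"quantized bytes:  {len(raw)}")
print(f"compressed bytes: {len(compressed)}")
print(f"extra ratio:      {len(raw) / len(compressed):.2f}x")
```

On real, re-scaled and pruned LLM weights the second stage is far more effective than on this random toy matrix; the example only demonstrates the structure of the pipeline and how the extra compression ratio is measured.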
arXiv.org Artificial Intelligence
Feb-21-2025