Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective

Weizhong Huang, Yuxin Zhang, Xiawu Zheng, Fei Chao, Rongrong Ji

arXiv.org Artificial Intelligence 

In this paper, we address the challenge of determining the layer-wise sparsity rates of large language models (LLMs) from a theoretical perspective. Specifically, we identify a critical issue of "reconstruction error explosion" in existing LLM sparsification methods: reconstruction errors accumulate throughout the sparsification process, with errors from earlier layers propagating and amplifying in subsequent layers. Through theoretical analysis, we derive a simple yet effective approach to layer-wise sparsity allocation that mitigates this issue. Our method assigns sparsity rates according to a monotonically increasing arithmetic progression, reducing the problem of determining sparsity rates for multiple layers to tuning a single common-difference hyperparameter. Remarkably, this allows the optimal layer-wise sparsity rates to be identified with just a few trials. Both our theoretical analysis and experimental results demonstrate that this sparsity allocation scheme is near optimal. Extensive experiments show that our method significantly improves the performance of sparse LLMs across various architectures, outperforming existing layer-wise sparsity methods. Furthermore, it enhances the performance of various compression techniques and is applicable to vision and multimodal models. Notably, our method reduces perplexity by 52.10 for the 70% sparse LLaMA2-7B model obtained via Wanda, improves average zero-shot accuracy by 10.50%, and delivers speedups of 2.63x and 2.23x on CPU and GPU, respectively.

All existing methods face the problem of "reconstruction error explosion"; however, our method achieves lower reconstruction error than competing approaches. Metric-based methods compute a per-layer importance score to derive each layer's sparsity rate, but these metrics are heuristically designed by human experts and are not optimal. Search-based methods, in turn, require a large number of iterative searches, which is time-consuming.
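As a concrete illustration of the arithmetic-progression allocation described in the abstract, the sketch below assigns each layer a sparsity rate that increases linearly with depth while keeping the average equal to the overall target. The function name, the zero-centered offset scheme, and the clipping step are assumptions for illustration; the paper determines only the common difference, and its exact parameterization may differ.

```python
import numpy as np

def layerwise_sparsity(num_layers: int, target_sparsity: float, beta: float) -> np.ndarray:
    """Hypothetical sketch: per-layer sparsity rates forming a monotonically
    increasing arithmetic progression with common difference `beta`, whose
    mean equals the overall target sparsity. Deeper layers receive higher
    sparsity, which is intended to mitigate "reconstruction error explosion"
    originating in early layers.
    """
    # Offsets centered at zero so the average stays at target_sparsity (assumption).
    offsets = (np.arange(num_layers) - (num_layers - 1) / 2.0) * beta
    rates = target_sparsity + offsets
    # Keep rates in the valid [0, 1] range; a no-op for reasonable beta.
    return np.clip(rates, 0.0, 1.0)

if __name__ == "__main__":
    # Example: a 32-layer LLaMA2-7B-style model at 70% overall sparsity.
    rates = layerwise_sparsity(num_layers=32, target_sparsity=0.70, beta=0.005)
    print(rates.round(4))   # increases from ~0.6225 to ~0.7775
    print(rates.mean())     # 0.70, the overall target
```

Because the allocation is fully determined by the single hyperparameter `beta`, selecting the layer-wise sparsity profile reduces to a one-dimensional sweep over a few candidate values, which is what makes the few-trial search described in the abstract feasible.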