EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation

Shih-Yang Liu, Huck Yang, Chien-Yi Wang, Nai Chit Fung, Hongxu Yin, Charbel Sakr, Saurav Muralidharan, Kwang-Ting Cheng, Jan Kautz, Yu-Chiang Frank Wang, Pavlo Molchanov, Min-Hung Chen

arXiv.org Artificial Intelligence 

Although Large Language Models (LLMs) exhibit superior performance across diverse applications, their practical deployment remains challenging due to their considerable model size and high inference costs. To mitigate these challenges, model compression research such as post-training compression (Ashkboos et al., 2024; Ma et al., 2023) and compression-aware training (Alvarez & Salzmann, 2017; Lym et al., 2019; Liu et al., 2024, 2023c) has been extensively explored to reduce the computational resource demands of serving LLMs (Zhu et al., 2023). However, most existing methods either incur significant accuracy degradation compared to uncompressed models or require lengthy training. Additionally, their flexibility is often limited to a discrete set of compression formats (e.g., 2:4 sparsity, 3/4-bit quantization), making it challenging to meet the diverse capacity and efficiency requirements of different users. To overcome this flexibility limitation, we reformulate the model compression problem as a customized compensation problem: given a compressed model, we introduce residual low-rank paths to compensate for compression errors under customized user requirements, such as target tasks and compression ratios. Rather than focusing solely on producing compressed models with minimal performance degradation, incorporating these residual paths gives the compensated model greater flexibility in adjusting its overall capacity, without being constrained by specific compression formats.
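The residual low-rank compensation idea can be illustrated with a minimal sketch: approximate the per-layer compression error of a weight matrix with a truncated SVD and attach the resulting factors as a parallel branch alongside the compressed weight. This is an illustration only, not the paper's method: EoRA's eigenspace-based approximation is not reproduced here, and the function and variable names (`lowrank_compensation`, `W_full`, `W_compressed`) are hypothetical.

```python
import torch

def lowrank_compensation(W_full: torch.Tensor, W_compressed: torch.Tensor, rank: int):
    """Approximate the compression error (W_full - W_compressed) with a
    rank-`rank` residual path B @ A via plain truncated SVD.
    (EoRA instead performs the approximation in an eigenspace, which is
    not shown in this sketch.)"""
    error = W_full - W_compressed                        # compression error to compensate
    U, S, Vh = torch.linalg.svd(error, full_matrices=False)
    B = U[:, :rank] * S[:rank]                           # (out_features, rank)
    A = Vh[:rank, :]                                     # (rank, in_features)
    return B, A

# At inference, the compensated layer computes
#   y = x @ W_compressed.T + x @ A.T @ B.T
# so the low-rank branch runs in parallel with the compressed weight and the
# compression format of W_compressed (sparsity, quantization) is left untouched;
# the rank controls the extra capacity spent on compensation.
```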