UNComp: Uncertainty-Aware Long-Context Compressor for Efficient Large Language Model Inference
Jing Xiong, Jianghan Shen, Fanghua Ye, Chaofan Tao, Zhongwei Wan, Jianqiao Lu, Xun Wu, Chuanyang Zheng, Zhijiang Guo, Lingpeng Kong, Ngai Wong
arXiv.org Artificial Intelligence
Deploying large language models (LLMs) is challenging due to their high memory and computational demands, especially during long-context inference. While key-value (KV) caching accelerates inference by reusing previously computed keys and values, it also introduces significant memory overhead. Existing KV cache compression methods, such as eviction and merging, typically compress the KV cache after it is generated and overlook the hidden states, failing to improve the speed of the prefilling stage. Additionally, applying a uniform compression rate across different attention heads can harm crucial retrieval heads in needle-in-a-haystack tasks due to excessive compression. In this paper, we propose UNComp, an uncertainty-aware compression scheme that leverages matrix entropy to estimate model uncertainty across layers and heads at the token sequence level. By grouping layers and heads based on their uncertainty, UNComp adaptively compresses both the hidden states and the KV cache. Our method achieves a 1.6× speedup in the prefilling stage and reduces the KV cache to 4.74% of its original size, resulting in a 6.4× increase in throughput and a 1.4× speedup in inference with only a 1.41% performance loss. Remarkably, in needle-in-a-haystack tasks, UNComp outperforms the full-size KV cache even when compressed to 9.38% of its original size. Our approach offers an efficient, training-free Grouped-Query Attention paradigm that can be seamlessly integrated into existing KV cache schemes.

The proliferation of large language models (LLMs) has led to unprecedented advancements in natural language processing (Achiam et al., 2023; Kaplan et al., 2020), enabling capabilities ranging from simple text generation to complex reasoning and dialogue.
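To make the mechanism concrete, the following Python sketch illustrates one plausible way to score per-head uncertainty with matrix entropy and bucket heads into compression groups. It is a minimal sketch, not the authors' released code: the layout of `keys` as (num_heads, seq_len, head_dim), the function names, and the equal-size grouping rule are all illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch (illustrative, not UNComp's actual implementation) of
# matrix-entropy-based uncertainty scoring for grouping attention heads.
import torch

def matrix_entropy(x: torch.Tensor) -> torch.Tensor:
    """Von Neumann entropy of the trace-normalized Gram matrix of x.

    x: (seq_len, dim) token representations for one head (assumed layout).
    """
    x = x - x.mean(dim=0, keepdim=True)            # center the token vectors
    gram = x @ x.T                                 # (seq_len, seq_len) Gram matrix
    gram = gram / gram.trace().clamp_min(1e-8)     # normalize to unit trace
    eigvals = torch.linalg.eigvalsh(gram).clamp_min(1e-8)
    return -(eigvals * eigvals.log()).sum()        # -sum(lambda * log lambda)

def group_heads_by_entropy(keys: torch.Tensor, num_groups: int = 4) -> torch.Tensor:
    """Rank heads by matrix entropy and bucket them into compression groups.

    keys: (num_heads, seq_len, head_dim). Lower-entropy heads carry more
    redundant token sequences and could be compressed more aggressively.
    The equal-size split below is a placeholder grouping rule.
    """
    ent = torch.stack([matrix_entropy(keys[h]) for h in range(keys.shape[0])])
    order = ent.argsort()                          # head indices, ascending entropy
    groups = torch.empty_like(order)
    groups[order] = torch.arange(len(order)) * num_groups // len(order)
    return groups                                  # group id per head
```

For long sequences, the seq_len × seq_len Gram matrix becomes expensive; since X Xᵀ and Xᵀ X share nonzero eigenvalues, the same entropy can be computed from the head_dim × head_dim covariance instead.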
Oct-3-2024