H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
Large Language Models (LLMs), despite their recent impressive accomplishments, are notably cost-prohibitive to deploy, particularly for applications involving long-content generation such as dialogue systems and story writing. Often, a large amount of transient state information, referred to as the KV cache, is stored in GPU memory in addition to model parameters, scaling linearly with the sequence length and batch size. In this paper, we introduce a novel approach for implementing the KV cache that significantly reduces its memory footprint. Our approach is based on the noteworthy observation that a small portion of tokens contributes most of the value when computing attention scores.
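The observation above suggests a simple cache-eviction policy: track each cached token's accumulated attention mass and, when the cache exceeds a fixed budget, keep only the highest-scoring ("heavy hitter") tokens plus a window of recent ones. The sketch below is a minimal NumPy illustration of that idea under stated assumptions; the function name `select_kv_indices` and the `budget`/`recent` parameters are illustrative, not the paper's API, and a real implementation would operate per layer and per attention head on GPU tensors, physically evicting entries rather than re-selecting over the full history.

```python
import numpy as np

def select_kv_indices(accumulated_scores: np.ndarray, budget: int, recent: int) -> np.ndarray:
    """Choose which cached tokens to keep under a fixed KV-cache budget.

    accumulated_scores: per-token attention mass summed over all decoding
        steps so far (shape: [seq_len]).
    budget: total number of KV entries to retain.
    recent: number of most-recent tokens that are always kept.
    """
    seq_len = accumulated_scores.shape[0]
    if seq_len <= budget:
        return np.arange(seq_len)  # everything fits; nothing to evict

    # Always keep the local window of most recent tokens.
    recent_idx = np.arange(seq_len - recent, seq_len)

    # Among older tokens, keep the heavy hitters: those with the
    # largest accumulated attention scores.
    older = accumulated_scores[: seq_len - recent]
    n_heavy = budget - recent
    heavy_idx = np.argpartition(older, -n_heavy)[-n_heavy:]

    return np.sort(np.concatenate([heavy_idx, recent_idx]))


# Toy decoding loop: after each step, accumulate the new attention row
# (random stand-in weights here) and re-select which entries to keep.
rng = np.random.default_rng(0)
scores = np.zeros(0)
for step in range(64):
    scores = np.append(scores, 0.0)                  # new token enters the cache
    attn_row = rng.dirichlet(np.ones(scores.size))   # stand-in attention weights
    scores += attn_row                               # accumulate attention mass
    keep = select_kv_indices(scores, budget=32, recent=8)
```

Under this policy the retained cache stays at a constant size (here 32 entries) regardless of generation length, which is the source of the memory savings the abstract describes.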
Neural Information Processing Systems