Loki: Low-rank Keys for Efficient Sparse Attention
Neural Information Processing Systems
Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used. In particular, the self-attention mechanism used in LLM inference contributes significantly to these costs, which has sparked an interest in approximating the self-attention computation to reduce them. In this work, we propose to approximate self-attention by focusing on the dimensionality of key vectors computed in the attention block. Our analysis reveals that key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki speeds up the attention computation due to reduced data movement (load/store) and compute costs while maintaining the efficacy of the models better than other popular approximation methods.
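The following is a minimal sketch of the idea described in the abstract: approximate attention scores are computed against low-rank projections of the cached keys, the highest-scoring tokens are selected, and exact attention is then run only over that subset. The helper name, shapes, and the PCA-style projection matrix are assumptions for illustration, not the authors' reference implementation.

```python
import torch

def low_rank_sparse_attention(q, keys, values, proj, k_top):
    """Hypothetical sketch of sparse attention with low-rank key scoring.

    q:      (d,)    query for the current decode step
    keys:   (n, d)  cached key vectors
    values: (n, d)  cached value vectors
    proj:   (d, r)  low-rank projection of the key space (e.g. top-r principal
                    components of keys), with r much smaller than d
    k_top:  number of cached tokens to keep for the exact attention step
    """
    d = q.shape[-1]

    # 1) Approximate scores in the r-dimensional space: only r of the d
    #    dimensions per cached key are read, reducing data movement.
    q_low = q @ proj                      # (r,)
    keys_low = keys @ proj                # (n, r); could be cached once
    approx_scores = keys_low @ q_low      # (n,)

    # 2) Rank tokens by approximate score and keep the top-k indices.
    top_idx = torch.topk(approx_scores, k_top).indices

    # 3) Exact attention restricted to the selected tokens.
    k_sel = keys[top_idx]                 # (k_top, d)
    v_sel = values[top_idx]               # (k_top, d)
    scores = (k_sel @ q) / d ** 0.5
    weights = torch.softmax(scores, dim=-1)
    return weights @ v_sel                # (d,)
```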