
Collaborating Authors

 Sadhukhan, Ranajoy


MagicPIG: LSH Sampling for Efficient LLM Generation

arXiv.org Artificial Intelligence

Large language models (LLMs) with long context windows have gained significant attention. However, the KV cache, stored to avoid re-computation, becomes a bottleneck. Various dynamic sparse or TopK-based attention approximation methods have been proposed to leverage the common insight that attention is sparse. In this paper, we first show that TopK attention itself suffers from quality degradation in certain downstream tasks because attention is not always as sparse as expected. Rather than selecting the keys and values with the highest attention scores, sampling with theoretical guarantees can provide a better estimation of the attention output. To make sampling-based approximation practical in LLM generation, we propose MagicPIG, a heterogeneous system based on Locality Sensitive Hashing (LSH). MagicPIG significantly reduces the workload of attention computation while preserving high accuracy across diverse tasks. MagicPIG stores the LSH hash tables and runs the attention computation on the CPU, which allows it to serve longer contexts and larger batch sizes with high approximation accuracy. MagicPIG improves decoding throughput by up to 5x across various GPU hardware and achieves 54 ms decoding latency on a single RTX 4090 for the Llama-3.1-8B-Instruct model with a context of 96k tokens. The code is available at https://github.com/Infini-AI-Lab/MagicPIG.
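To make the core idea concrete, below is a minimal sketch (not the authors' code) of LSH-based sampling for attention estimation. It assumes SimHash (random-hyperplane) hashing with K bits per table and L tables, treats "collides with the query in at least one table" as the sampling rule, and uses a self-normalized importance-sampling estimator; the function names, shapes, and the exact collision rule are illustrative assumptions, not the paper's implementation.

```python
# Sketch: sampling keys via SimHash collisions and estimating the attention
# output with importance weights 1/p_i, where p_i is the collision probability.
import numpy as np

def simhash_codes(x, planes):
    # planes: (L, K, d) random hyperplanes; returns one integer code per table.
    bits = (np.einsum('lkd,d->lk', planes, x) > 0).astype(np.uint64)
    return (bits << np.arange(bits.shape[1], dtype=np.uint64)).sum(axis=1)

def lsh_sampled_attention(q, K_mat, V, planes):
    d = q.shape[0]
    q_codes = simhash_codes(q, planes)                              # (L,)
    k_codes = np.stack([simhash_codes(k, planes) for k in K_mat])   # (n, L)
    sampled = (k_codes == q_codes).any(axis=1)  # sampled if >= 1 table collides

    # SimHash collision probability: one bit agrees with prob 1 - theta/pi,
    # a K-bit table collides with prob (1 - theta/pi)^K, and at least one of
    # L tables collides with prob 1 - (1 - (1 - theta/pi)^K)^L.
    cos = (K_mat @ q) / (np.linalg.norm(K_mat, axis=1) * np.linalg.norm(q) + 1e-9)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    p_table = (1 - theta / np.pi) ** planes.shape[1]
    p = 1 - (1 - p_table) ** planes.shape[0]

    idx = np.where(sampled)[0]
    if idx.size == 0:
        return np.zeros_like(q)
    # Self-normalized importance sampling: weight each sampled key by its
    # (unnormalized) softmax score divided by its sampling probability.
    scores = np.exp(K_mat[idx] @ q / np.sqrt(d))
    w = scores / p[idx]
    return (w[:, None] * V[idx]).sum(axis=0) / w.sum()

# Usage: n cached keys/values of dimension d, L tables of K hyperplanes each.
rng = np.random.default_rng(0)
n, d, L, Kbits = 4096, 64, 8, 10
q, K_mat, V = rng.standard_normal(d), rng.standard_normal((n, d)), rng.standard_normal((n, d))
planes = rng.standard_normal((L, Kbits, d))
approx_out = lsh_sampled_attention(q, K_mat, V, planes)
```

The point of the estimator is that keys with low attention scores are still sampled occasionally and up-weighted by 1/p, so the approximation does not silently drop probability mass the way a hard TopK selection can.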


Memory Mosaics

arXiv.org Artificial Intelligence

This paper presents a learning system architecture, Memory Mosaics, in which multiple associative memories work in concert to carry out a prediction task of interest. Such systems are closely related to memory networks [Weston et al., 2014, Sukhbaatar et al., 2015] and resemble transformers [Vaswani et al., 2017] despite significant differences. Like transformers, Memory Mosaics possess some of the disentanglement and compositional capabilities that have long eluded machine learning systems [Lake and Baroni, 2018]. Unlike transformers, whose internal mechanisms are hard to decipher [Olsson et al., 2022, Bietti et al., 2024], Memory Mosaics achieve these capabilities in comparatively transparent ways. The three main contributions of this work are (a) recognizing and exploiting a similarity between smoothing associative memories and self-attention, (b) identifying and illustrating the predictive disentanglement principle, which explains how training decomposes the overall task in interesting ways, and (c) showing that this comparatively transparent architecture matches the performance of decoding transformers on a language modeling task. Section 2 describes the basic architecture and outlines its consequences. Section 3 illustrates the predictive disentanglement principle. Section 4 extends these ideas to fully formed memory mosaics.
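The similarity between smoothing associative memories and self-attention mentioned in contribution (a) can be illustrated with a short sketch. The class below is a Gaussian-kernel (Nadaraya-Watson style) associative memory, not the paper's implementation; the class name, bandwidth parameter, and unit-norm remark are assumptions made for illustration. Retrieval is a softmax-weighted average of stored values, which is exactly the form of a single attention head.

```python
# Sketch: a kernel-smoothing associative memory whose retrieval rule has the
# same softmax-weighted-average form as self-attention.
import numpy as np

class SmoothingAssociativeMemory:
    def __init__(self, beta=1.0):
        self.beta = beta              # kernel bandwidth / inverse temperature
        self.keys, self.values = [], []

    def store(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def retrieve(self, query):
        K = np.stack(self.keys)       # (n, d)
        V = np.stack(self.values)     # (n, d_v)
        # Gaussian kernel on the key-query distance. With unit-norm keys the
        # squared distance differs from -2 * (k . q) only by constants, so the
        # weights below reduce to a softmax over dot products, i.e. attention.
        logits = -self.beta * np.sum((K - query) ** 2, axis=1)
        w = np.exp(logits - logits.max())
        w /= w.sum()
        return w @ V                  # smoothed (softmax-weighted) value

# Usage: store (key, value) pairs observed in a context, then query the memory.
rng = np.random.default_rng(0)
mem = SmoothingAssociativeMemory(beta=2.0)
for _ in range(16):
    mem.store(rng.standard_normal(8), rng.standard_normal(4))
print(mem.retrieve(rng.standard_normal(8)).shape)   # (4,)
```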