Neurocache: Efficient Vector Retrieval for Long-range Language Modeling

Ali Safaya, Deniz Yuret

arXiv.org Artificial Intelligence 

This paper introduces Neurocache, an approach to extend the effective context size of large language models (LLMs) using an external vector cache to store their past states. Like recent vector retrieval approaches, Neurocache uses an efficient k-nearest-neighbor (kNN) algorithm to retrieve relevant past states and incorporate them into the attention process. Neurocache improves upon previous methods by (1) storing compressed states, which reduces cache size; (2) performing a single retrieval operation per token, which increases inference speed; and (3) extending the retrieval window to neighboring states, which improves both language modeling and downstream task accuracy. Our experiments show the effectiveness of Neurocache both for models trained from scratch and for pre-trained models such as Llama2-7B and Mistral-7B when enhanced with the cache mechanism. We also compare Neurocache with text retrieval methods and show improvements in single-document question-answering and few-shot learning tasks.

Figure 1: Performance and scalability of Neurocache vs. Memorizing Transformers (Wu et al., 2022) on PG-19: the graph illustrates Neurocache's consistently lower token perplexity and faster inference times across various cache sizes on the Project Gutenberg-19 dataset.
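The three design choices named in the abstract (compressed states, a single retrieval per token, and a retrieval window extended to neighbors) can be illustrated with a short sketch. Below is a minimal PyTorch mock-up, assuming a projection-based compression and exact dot-product kNN; the class name `CompressedCache`, the method names, and all dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch


class CompressedCache:
    """Stores down-projected ("compressed") past hidden states and supports
    per-token kNN retrieval extended to neighboring cache entries."""

    def __init__(self, hidden_dim: int, compressed_dim: int, max_size: int = 16384):
        # Linear down-projection compresses states before caching (smaller cache).
        self.proj = torch.nn.Linear(hidden_dim, compressed_dim, bias=False)
        self.max_size = max_size
        self.states = torch.empty(0, compressed_dim)  # cached compressed states

    def update(self, hidden: torch.Tensor) -> None:
        """Compress new hidden states of shape (seq, hidden_dim) and append."""
        with torch.no_grad():
            compressed = self.proj(hidden)
        # Keep only the most recent max_size entries.
        self.states = torch.cat([self.states, compressed])[-self.max_size:]

    def retrieve(self, queries: torch.Tensor, k: int = 4, window: int = 1) -> torch.Tensor:
        """One kNN lookup per token; each hit is widened to `window` neighboring
        entries per side. Returns shape (seq, k * (2 * window + 1), dim)."""
        # Exact dot-product similarity; a real system would use an ANN index.
        sims = queries @ self.states.T                       # (seq, cache_size)
        topk = sims.topk(k, dim=-1).indices                  # (seq, k)
        offsets = torch.arange(-window, window + 1)          # neighboring states
        idx = (topk.unsqueeze(-1) + offsets).clamp(0, len(self.states) - 1)
        return self.states[idx.reshape(queries.size(0), -1)]


cache = CompressedCache(hidden_dim=512, compressed_dim=128)
cache.update(torch.randn(1024, 512))       # cache a past segment
queries = cache.proj(torch.randn(16, 512))  # project current tokens to cache space
retrieved = cache.retrieve(queries)         # (16, 12, 128) with k=4, window=1
```

In the full method, the gathered states would be incorporated as additional key/value pairs in the model's attention layers; the exact search above is only for illustration, since the paper's speed claims rest on an efficient kNN implementation.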
