LLM Vocabulary Compression for Low-Compute Environments

Sreeram Vennam, Anish Joishy, Ponnurangam Kumaraguru

arXiv.org Artificial Intelligence 

We present a method for compressing the final linear layer of language models, reducing memory usage by up to 3.4x without significant performance loss. By grouping tokens based on Byte Pair Encoding (BPE) merges, we avoid materialising the memory-intensive logits tensor. Evaluations on the TinyStories dataset show that our method performs on par with GPT-Neo and GPT-2 while improving throughput by up to 3x, making it suitable for low-compute environments.
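
The abstract does not spell out the mechanism, but one way to realise a grouped output head is a two-stage lookup: score the token groups first, then score only the tokens inside the winning group, so the full (batch, vocab) logits tensor is never built. Below is a minimal PyTorch sketch of that general idea, assuming group assignments (in the paper, derived from BPE merges) are supplied externally; the names `GroupedLMHead`, `group_ids`, and `greedy_token` are illustrative, not the paper's API.

```python
import torch
import torch.nn as nn

class GroupedLMHead(nn.Module):
    """Hypothetical two-stage output head: pick a token group first, then
    score only the tokens inside that group, so the full (batch, vocab)
    logits tensor is never materialised."""

    def __init__(self, d_model: int, group_ids: torch.Tensor):
        super().__init__()
        # group_ids[v] = group index of vocabulary token v (the paper
        # derives these groups from BPE merges; here they are an input).
        self.register_buffer("group_ids", group_ids)
        num_groups = int(group_ids.max().item()) + 1
        self.group_proj = nn.Linear(d_model, num_groups)   # stage 1: group scores
        # Per-token output embeddings, indexed sparsely in stage 2.
        self.token_emb = nn.Parameter(torch.randn(group_ids.numel(), d_model) * 0.02)

    @torch.no_grad()
    def greedy_token(self, h: torch.Tensor) -> torch.Tensor:
        """h: (batch, d_model) hidden states -> (batch,) greedy token ids."""
        best_group = self.group_proj(h).argmax(dim=-1)     # (batch,)
        out = torch.empty(h.size(0), dtype=torch.long, device=h.device)
        for b in range(h.size(0)):
            # Indices of vocabulary tokens belonging to the chosen group.
            members = (self.group_ids == best_group[b]).nonzero(as_tuple=True)[0]
            scores = self.token_emb[members] @ h[b]        # (|group|,) only
            out[b] = members[scores.argmax()]
        return out

# Toy usage: a 1000-token vocabulary split into 32 groups.
if __name__ == "__main__":
    vocab, d_model = 1000, 64
    group_ids = torch.randint(0, 32, (vocab,))
    head = GroupedLMHead(d_model, group_ids)
    h = torch.randn(4, d_model)
    print(head.greedy_token(h))  # e.g. tensor([412, 87, 903, 15])
```

The memory saving comes from stage 2 touching only one group's rows of the output matrix per position instead of all `vocab` rows; how groups are chosen (here random, in the paper BPE-merge-based) determines how much accuracy this costs.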