More Effective LLM Compressed Tokens with Uniformly Spread Position Identifiers and Compression Loss

Runsong Zhao, Pengcheng Huang, Xinyu Liu, Chunyang Xiao, Tong Xiao, Jingbo Zhu

arXiv.org Artificial Intelligence 

Compressing Transformer inputs into compressed tokens allows running LLMs with improved speed and cost efficiency. Based on the compression method ICAE, we carefully examine the position identifier choices for compressed tokens and also propose a new compression loss. We demonstrate empirically that our proposed methods achieve significantly higher compression ratios (15x compared to 4x for ICAE) while attaining comparable reconstruction performance.
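
The paper's title refers to "uniformly spread position identifiers" for the compressed tokens. As an illustration only, the minimal sketch below contrasts one common baseline choice (appending consecutive position identifiers after the input span) with spreading the identifiers evenly across the original sequence span. The function names and the exact spacing rule are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): two ways to assign position
# identifiers to K compressed tokens summarizing a sequence of length L.

def consecutive_position_ids(seq_len: int, num_compressed: int) -> list[int]:
    """Baseline choice: compressed tokens get positions appended after the input."""
    return list(range(seq_len, seq_len + num_compressed))

def uniformly_spread_position_ids(seq_len: int, num_compressed: int) -> list[int]:
    """Assumed scheme: spread compressed-token positions evenly over the input span."""
    step = seq_len / num_compressed
    return [round((i + 0.5) * step) for i in range(num_compressed)]

if __name__ == "__main__":
    # For a 512-token input compressed to 32 tokens:
    print(consecutive_position_ids(512, 32))       # [512, 513, ..., 543]
    print(uniformly_spread_position_ids(512, 32))  # [8, 24, 40, ..., 504]
```

Under this reading, each compressed token's identifier points into the region of the original sequence it is meant to summarize, rather than to positions beyond the input; the paper should be consulted for the exact scheme and the form of the proposed compression loss.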
