R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search
Yibo Wang, Haotian Luo, Huanjin Yao, Tiansheng Huang, Haiying He, Rui Liu, Naiqiang Tan, Jiaxing Huang, Xiaochun Cao, Dacheng Tao, Li Shen
arXiv.org Artificial Intelligence
Chain-of-Thought (CoT) reasoning enhances large language models (LLMs) by enabling step-by-step problem-solving, yet its extension to Long-CoT introduces substantial computational overhead due to increased token length. Existing compression approaches, whether instance-level or token-level, either sacrifice essential local reasoning signals such as reflection or yield incoherent outputs. To address these limitations, we propose R1-Compress, a two-stage chunk-level compression framework that preserves both local information and coherence. Our method segments Long-CoT into manageable chunks, applies LLM-driven inner-chunk compression, and employs an inter-chunk search mechanism to select a short and coherent sequence. Experiments on Qwen2.5-Instruct models across MATH500, AIME24, and GPQA-Diamond demonstrate that R1-Compress significantly reduces token usage while maintaining comparable reasoning accuracy. On MATH500, R1-Compress achieves an accuracy of 92.4%, only a 0.6% drop from the Long-CoT baseline, while reducing token usage by about 20%. Source code will be available at https://github.com/w-yibo/R1-Compress.
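The two-stage pipeline described in the abstract (chunk segmentation, inner-chunk compression, inter-chunk search) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `compress_candidates` and `coherence` functions below are hypothetical stand-ins for the paper's LLM-driven compressor and its candidate-selection criterion.

```python
def segment_chunks(cot: str, chunk_size: int = 2) -> list[str]:
    """Stage 1: split a long chain-of-thought into chunks.
    Steps are assumed to be newline-delimited."""
    steps = [s for s in cot.split("\n") if s.strip()]
    return ["\n".join(steps[i:i + chunk_size])
            for i in range(0, len(steps), chunk_size)]

def compress_candidates(chunk: str, n: int = 2) -> list[str]:
    """Stand-in for LLM-driven inner-chunk compression: return up to `n`
    candidate compressions. Here we simply truncate to the first k steps;
    the paper would instead prompt an LLM to rewrite the chunk."""
    steps = chunk.split("\n")
    return ["\n".join(steps[:k]) for k in range(1, min(n, len(steps)) + 1)]

def coherence(prev: str, cand: str) -> float:
    """Toy coherence score: word overlap between the tail of the previous
    chunk and the candidate (a model-based score in practice)."""
    prev_words = set(prev.split()[-20:])
    cand_words = set(cand.split())
    return len(prev_words & cand_words) / max(1, len(cand_words))

def search_compressed(cot: str, alpha: float = 0.5) -> str:
    """Stage 2: greedy inter-chunk search. For each chunk, pick the
    candidate that trades off brevity against coherence with the
    already-selected prefix."""
    out: list[str] = []
    for chunk in segment_chunks(cot):
        prev = out[-1] if out else ""
        best = max(compress_candidates(chunk),
                   key=lambda c: alpha * coherence(prev, c)
                                 - (1 - alpha) * len(c.split()))
        out.append(best)
    return "\n".join(out)
```

With a toy four-step trace, `search_compressed` keeps one step per chunk and returns a strictly shorter sequence; in the actual method, the candidates come from an LLM and the selection balances token savings against reasoning fidelity.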
Nov-14-2025