Enhancing RAG Efficiency with Adaptive Context Compression
Shuyu Guo, Shuo Zhang, Zhaochun Ren
arXiv.org Artificial Intelligence
Retrieval-augmented generation (RAG) enhances large language models (LLMs) with external knowledge but incurs significant inference costs due to lengthy retrieved contexts. While context compression mitigates this issue, existing methods apply fixed compression rates, over-compressing simple queries or under-compressing complex ones. We propose Adaptive Context Compression for RAG (ACC-RAG), a framework that dynamically adjusts compression rates based on input complexity, optimizing inference efficiency without sacrificing accuracy. ACC-RAG combines a hierarchical compressor (for multi-granular embeddings) with a context selector that retains the minimal sufficient information, akin to human skimming. Evaluated on Wikipedia and five QA datasets, ACC-RAG outperforms fixed-rate methods and achieves over 4× faster inference than standard RAG while maintaining or improving accuracy.
Sep-25-2025
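The adaptive selection idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `select_context` function, the sufficiency threshold, and the per-segment relevance scores are all hypothetical stand-ins for the paper's context selector, which would operate on the hierarchical compressor's multi-granular embeddings.

```python
import numpy as np

def select_context(segment_scores, sufficiency_threshold=0.8):
    """Hypothetical sketch: greedily keep the highest-scoring compressed
    segments until their cumulative normalized relevance mass reaches the
    threshold, so simple queries retain few segments and complex ones more."""
    order = np.argsort(segment_scores)[::-1]   # most relevant segments first
    total = float(np.sum(segment_scores))
    kept, mass = [], 0.0
    for idx in order:
        kept.append(int(idx))
        mass += segment_scores[idx] / total
        if mass >= sufficiency_threshold:
            break                              # minimal sufficient context found
    return kept

# A peaked score distribution (simple query) keeps fewer segments than a
# flat one (complex query), yielding a query-dependent compression rate.
simple_query = np.array([0.9, 0.05, 0.03, 0.02])
complex_query = np.array([0.3, 0.25, 0.25, 0.2])
print(len(select_context(simple_query)))   # 1 segment suffices
print(len(select_context(complex_query)))  # 3 segments needed
```

The key property this sketch demonstrates is that the amount of retained context is an output of the selection procedure rather than a fixed hyperparameter, which is the contrast the abstract draws against fixed-rate compression.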