AdmTree: Compressing Lengthy Context with Adaptive Semantic Trees
Yangning Li, Shaoshen Chen, Yinghui Li, Yankai Chen, Hai-Tao Zheng, Hui Wang, Wenhao Jiang, Philip S. Yu
arXiv.org Artificial Intelligence
The quadratic complexity of self-attention constrains Large Language Models (LLMs) in processing long contexts, a capability essential for many advanced applications. Context compression aims to alleviate this computational bottleneck while retaining critical semantic information. However, existing approaches often fall short: explicit methods may compromise local detail, whereas implicit methods can suffer from positional biases, information degradation, or an inability to capture long-range semantic dependencies. We propose AdmTree, a novel framework for adaptive, hierarchical context compression with a central focus on preserving high semantic fidelity while maintaining efficiency. AdmTree dynamically segments input based on information density, utilizing gist tokens to summarize variable-length segments as the leaves of a semantic binary tree. This structure, together with a lightweight aggregation mechanism and a frozen backbone LLM (thereby minimizing new trainable parameters), enables efficient hierarchical abstraction of the context. By preserving fine-grained details alongside global semantic coherence, mitigating positional bias, and dynamically adapting to content, AdmTree robustly retains the semantic information of long contexts.
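To make the tree construction described in the abstract concrete, the following is a minimal sketch of the two core ideas: segmenting input by information density into variable-length leaves, and merging adjacent nodes bottom-up into a semantic binary tree. The density scores, the greedy budget-based cutting, and the mean-pooling aggregation are all illustrative stand-ins; AdmTree's actual segmentation, gist-token summarization, and learned lightweight aggregation are not specified here.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    gist: List[float]            # fixed-size summary vector for this span
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def segment_by_density(scores: List[float], budget: float = 4.0) -> List[List[int]]:
    """Greedy variable-length segmentation: cut a segment once the
    accumulated 'information density' reaches a budget. This is a
    hypothetical stand-in for AdmTree's adaptive segmentation."""
    segments, current, mass = [], [], 0.0
    for i, s in enumerate(scores):
        current.append(i)
        mass += s
        if mass >= budget:
            segments.append(current)
            current, mass = [], 0.0
    if current:
        segments.append(current)
    return segments

def mean_pool(vectors: List[List[float]]) -> List[float]:
    """Average element-wise; a placeholder for the learned aggregation."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def build_tree(leaves: List[Node]) -> Node:
    """Merge adjacent nodes pairwise until one root remains, so each
    internal node summarizes a contiguous span of the context."""
    level = leaves
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            l, r = level[i], level[i + 1]
            nxt.append(Node(mean_pool([l.gist, r.gist]), l, r))
        if len(level) % 2:       # carry an unpaired node to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

In this sketch, dense regions of the input produce shorter segments (and hence more leaves), so fine-grained detail is kept where the content warrants it, while the upper tree levels provide progressively coarser summaries of the whole context.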
Dec-5-2025