Learning What to Remember: Adaptive Probabilistic Memory Retention for Memory-Efficient Language Models
S M Rafiuddin, Muntaha Nujat Khan
arXiv.org Artificial Intelligence
Transformer attention scales quadratically with sequence length, O(n^2), limiting long-context use. We propose Adaptive Retention, a probabilistic, layer-wise token selection mechanism that learns which representations to keep under a strict global budget M. Retention is modeled with Bernoulli gates trained via a Hard-Concrete/variational relaxation and enforced with a simple top-M rule at inference, making the method differentiable and drop-in for standard encoders. Across classification, extractive QA, and long-document summarization, keeping only 30-50% of tokens preserves >= 95% of full-model performance while cutting peak memory by ~35-45% and improving throughput by up to ~1.8x. This architecture-agnostic approach delivers practical long-context efficiency without modifying base attention or task heads.
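The abstract's core mechanism, stochastic Hard-Concrete gates during training and a hard top-M cut at inference, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under assumed hyperparameters (beta, gamma, zeta) and assumed names (`HardConcreteGate`, `budget_m`), not the authors' implementation.

```python
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Per-token retention gate: Hard-Concrete relaxation of Bernoulli gates
    in training, exact top-M selection at inference (illustrative sketch)."""

    def __init__(self, hidden_dim: int, beta: float = 2.0 / 3.0,
                 gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        # Token-wise retention logits (log alpha) predicted from the hidden state.
        self.scorer = nn.Linear(hidden_dim, 1)
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, hidden: torch.Tensor, budget_m: int) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) -> gates: (batch, seq_len)
        log_alpha = self.scorer(hidden).squeeze(-1)
        if self.training:
            # Stochastic, differentiable Hard-Concrete sample in [0, 1].
            u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / self.beta)
            s = s * (self.zeta - self.gamma) + self.gamma
            return s.clamp(0.0, 1.0)
        # Inference: keep exactly the top-M tokens per sequence (hard 0/1 mask).
        keep = log_alpha.topk(min(budget_m, log_alpha.size(-1)), dim=-1).indices
        gates = torch.zeros_like(log_alpha)
        return gates.scatter(-1, keep, 1.0)
```

The resulting gate values would multiply the token representations before they are cached or passed to the next layer; how the global budget M is apportioned across layers is a design choice the paper addresses and this sketch does not.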
Oct-13-2025