Robustifying Learning-Augmented Caching Efficiently without Compromising 1-Consistency
Chen, Peng, Zhao, Hailiang, Zhang, Jiaji, Tang, Xueyan, Wang, Yixuan, Deng, Shuiguang
arXiv.org Artificial Intelligence
The online caching problem aims to minimize cache misses when serving a sequence of requests under a limited cache size. While naive learning-augmented caching algorithms achieve ideal $1$-consistency, they lack robustness guarantees. Existing robustification methods either sacrifice $1$-consistency or introduce excessive computational overhead. In this paper, we introduce Guard, a lightweight robustification framework that improves the robustness of a broad class of learning-augmented caching algorithms to $2H_{k-1} + 2$ while preserving their $1$-consistency. Guard achieves the current best-known trade-off between consistency and robustness, with only $O(1)$ additional per-request overhead, thereby maintaining the original time complexity of the base algorithm. Extensive experiments across multiple real-world datasets and prediction models validate the effectiveness of Guard in practice.
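The abstract does not describe Guard's internal mechanism or the base algorithms it wraps. As a hedged illustration of the setting only, the sketch below simulates a common learning-augmented baseline: a cache that, on a miss with a full cache, evicts the item whose *predicted* next request is furthest in the future (a prediction-based variant of Belady's rule). The function name and the `(time, item)`-keyed prediction interface are my own assumptions, not from the paper.

```python
def predicted_belady_misses(requests, predictions, k):
    """Count misses for a size-k cache that evicts the cached item whose
    predicted next request time is largest (prediction-based Belady).

    predictions: dict mapping (current_time, item) -> predicted time of
    that item's next request; missing entries mean "never again".
    """
    cache = set()
    misses = 0
    for t, item in enumerate(requests):
        if item in cache:
            continue  # hit: nothing to do
        misses += 1
        if len(cache) >= k:
            # Evict the item predicted to be needed furthest in the future.
            victim = max(cache, key=lambda x: predictions.get((t, x), float("inf")))
            cache.remove(victim)
        cache.add(item)
    return misses
```

With perfect predictions this policy matches the offline-optimal eviction choice; with bad predictions its miss count can degrade arbitrarily, which is exactly the gap that robustification frameworks like Guard are meant to close while keeping $1$-consistency when predictions are accurate.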
Nov-17-2025
- Country:
  - Asia
    - China > Jiangsu Province > Nanjing (0.04)
    - Singapore (0.04)
  - Europe > Germany (0.04)
  - North America > United States > Texas > Brazos County > College Station (0.04)
- Genre:
  - Research Report > New Finding (0.67)
- Industry:
  - Education (0.46)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning > Neural Networks (0.46)
      - Representation & Reasoning (1.00)
    - Communications (0.67)
    - Data Science (0.92)
    - Software (0.67)