LLMComp: A Language Modeling Paradigm for Error-Bounded Scientific Data Compression (Technical Report)
Guozhong Li, Muhannad Alhumaidi, Spiros Skiadopoulos, Panos Kalnis
arXiv.org Artificial Intelligence
The rapid growth of high-resolution scientific simulations and observation systems is generating massive spatiotemporal datasets, making efficient, error-bounded compression increasingly important. Meanwhile, decoder-only large language models (LLMs) have demonstrated remarkable capabilities in modeling complex sequential data. In this paper, we propose LLMComp, a novel lossy compression paradigm that leverages decoder-only LLMs to model scientific data. LLMComp first quantizes 3D fields into discrete tokens, arranges them via Z-order curves to preserve locality, and applies coverage-guided sampling to improve training efficiency. An autoregressive transformer is then trained with spatial-temporal embeddings to model token transitions. During compression, the model performs top-k prediction, storing only rank indices and fallback corrections to ensure strict error bounds. Experiments on multiple reanalysis datasets show that LLMComp consistently outperforms state-of-the-art compressors, achieving up to 30% higher compression ratios under strict error bounds. These results highlight the potential of LLMs as general-purpose compressors for high-fidelity scientific data.
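The abstract mentions arranging quantized tokens along a Z-order (Morton) curve to preserve spatial locality in the linearized sequence. A minimal sketch of 3D Morton indexing is shown below; this is a standard bit-interleaving construction, not the paper's actual implementation, and the grid size and bit width are illustrative assumptions.

```python
def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the low `bits` bits of x, y, z into a Z-order (Morton) index.

    Bit i of x lands at position 3*i, of y at 3*i + 1, of z at 3*i + 2,
    so coordinates that differ only in low-order bits map to nearby indices.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Linearize a toy 4x4x4 grid: sorting cell coordinates by Morton code
# keeps spatially adjacent cells close together in the token sequence.
coords = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
z_order = sorted(coords, key=lambda c: morton3d(*c))
```

Sorting by this key, rather than by a row-major index, is what gives the autoregressive model short-range context that is also short-range in space.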
Nov-6-2025