Systematic Characterization of LLM Quantization: A Performance, Energy, and Quality Perspective
Large language models (LLMs) have demonstrated remarkable capabilities across diverse domains, but their heavy resource demands make quantization, the reduction of numerical precision to lower-bit formats, critical for efficient serving. While many quantization methods exist, a systematic understanding of their performance, energy, and quality tradeoffs under realistic serving conditions is still lacking. In this work, we first develop qMeter, a fully automated online characterization framework, and then conduct an in-depth characterization of 11 post-training LLM quantization methods across four model sizes (7B to 70B) and two GPU architectures (A100 and H100). We evaluate quantization at the application, workload, parallelism, and hardware levels under online serving conditions. Our study reveals highly task- and method-dependent tradeoffs, strong sensitivity to workload characteristics, and complex interactions with parallelism and GPU architecture. We further present three optimization case studies illustrating deployment challenges in capacity planning, energy-efficient scheduling, and multi-objective tuning. To the best of our knowledge, this is one of the first comprehensive application-, system-, and hardware-level characterizations of LLM quantization from a joint performance, energy, and quality perspective.
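For readers unfamiliar with the core idea, the sketch below illustrates what "reducing precision to lower-bit formats" means in practice. It is a hypothetical minimal example in Python/NumPy, not qMeter's API and not any of the 11 methods evaluated in the paper: it performs symmetric per-tensor int8 post-training weight quantization and the matching dequantization used at compute time.

```python
# Minimal sketch of symmetric per-tensor int8 post-training weight
# quantization. Illustrative only; function names are hypothetical and
# do not correspond to qMeter or any specific method in the paper.
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights onto the signed int8 range [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(w - dequantize(q, s))))
```

Practical post-training methods typically refine this per-tensor scheme, for example with per-channel or group-wise scales, which is part of why the quality side of the tradeoff is so method-dependent.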
arXiv.org Artificial Intelligence
Aug-26-2025