The Geometry of LLM Quantization: GPTQ as Babai's Nearest Plane Algorithm
Jiale Chen, Yalda Shabanzadeh, Elvir Crnčević, Torsten Hoefler, Dan Alistarh
arXiv.org Artificial Intelligence
Quantizing the weights of large language models (LLMs) from 16-bit to lower bitwidths is the de facto approach for deploying massive transformers onto more affordable accelerators. While GPTQ has emerged as one of the standard methods for one-shot post-training quantization at LLM scale, its inner workings are typically described as a sequence of ad-hoc algebraic updates that obscure its geometric meaning and offer no worst-case guarantees. In this work, we show that, when executed back-to-front (from the last dimension to the first) for a linear layer, GPTQ is mathematically identical to Babai's nearest plane algorithm for the classical closest vector problem (CVP) on a lattice defined by the Hessian matrix of the layer's inputs. This equivalence is based on a sophisticated mathematical argument, and it has two analytical consequences: first, the GPTQ error-propagation step gains an intuitive geometric interpretation; second, GPTQ inherits the error upper bound of Babai's algorithm under the assumption that no weights are clipped. Leveraging this bound, we design post-training quantization methods that avoid clipping and outperform the original GPTQ. In addition, we provide efficient GPU inference kernels for the resulting representation. Taken together, these results place GPTQ on a firm theoretical footing and open the door to importing decades of progress in lattice algorithms into the design of future quantization algorithms for billion-parameter models.
Oct-2-2025
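The abstract's central claim, that back-to-front GPTQ coincides with Babai's nearest plane algorithm on a lattice derived from the layer Hessian, can be illustrated with a small sketch. The code below is not the authors' implementation: it runs a textbook Babai nearest-plane pass on the Cholesky factor of a toy Hessian, with random placeholder calibration data, an integer quantization grid, and no clipping assumed throughout.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Return integer coefficients z so that B @ z approximates the
    lattice point closest to the target t (closest vector problem)."""
    n = B.shape[1]
    # Classical Gram-Schmidt orthogonalization of the basis columns.
    B_star = np.zeros_like(B, dtype=float)
    for i in range(n):
        B_star[:, i] = B[:, i]
        for j in range(i):
            mu = (B[:, i] @ B_star[:, j]) / (B_star[:, j] @ B_star[:, j])
            B_star[:, i] -= mu * B_star[:, j]

    z = np.zeros(n, dtype=int)
    residual = np.array(t, dtype=float)
    # Back-to-front sweep: round one coefficient per plane, then subtract
    # its contribution from the residual (the error-propagation step).
    for i in reversed(range(n)):
        c = int(np.rint((residual @ B_star[:, i]) /
                        (B_star[:, i] @ B_star[:, i])))
        z[i] = c
        residual -= c * B[:, i]
    return z

# Toy setup: H stands in for the layer's input Hessian X X^T and w for a
# single unquantized weight row; both are random placeholders.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))     # hypothetical calibration inputs
H = X @ X.T + 1e-3 * np.eye(8)       # proxy Hessian (regularized)
L = np.linalg.cholesky(H)            # H = L @ L.T
w = 3.0 * rng.standard_normal(8)     # unquantized weight row

# Minimizing (w - z)^T H (w - z) over integer z is a CVP on the lattice
# spanned by the columns of L.T, with target L.T @ w.
z = babai_nearest_plane(L.T, L.T @ w)
wr = np.rint(w)
print("round-to-nearest:", wr.astype(int))
print("Babai / GPTQ-style:", z)
print("Hessian-weighted error (round):", (w - wr) @ H @ (w - wr))
print("Hessian-weighted error (Babai):", (w - z) @ H @ (w - z))
```

Because the basis L.T is upper triangular, each Gram-Schmidt vector is a scaled coordinate axis, so the back-to-front sweep reduces to rounding one weight at a time and pushing the rounding error onto the not-yet-quantized coordinates, which is the geometric reading of GPTQ's update that the paper formalizes.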