Training with Fewer Bits: Unlocking Edge LLMs Training with Stochastic Rounding
Taowen Liu, Marta Andronic, Deniz Gündüz, George A. Constantinides
LLM training is resource-intensive. Quantized training improves computational and memory efficiency but introduces quantization noise, which can hinder convergence and degrade model accuracy. Stochastic Rounding (SR) has emerged as a theoretically attractive alternative to deterministic rounding, offering unbiased gradient estimates. However, its interaction with other training factors, especially batch size, remains underexplored. In this paper, we present a theoretical and empirical study of mini-batch stochastic gradient descent (SGD) with SR, showing that increased batch sizes can compensate for reduced precision during back-propagation. Furthermore, we show that quantizing weights and activations impacts gradient variance in distinct ways. Our experiments validate these theoretical insights.
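As a rough illustration of the stochastic rounding operator the abstract refers to (a minimal PyTorch sketch, not the paper's implementation; the function name, the fixed uniform quantization step, and the tensor-wide formulation are assumptions for illustration), the snippet below rounds a tensor to a uniform grid, rounding up with probability equal to the fractional remainder so that the rounded value equals the input in expectation.

```python
import torch

def stochastic_round(x: torch.Tensor, step: float) -> torch.Tensor:
    """Round each element of x to a multiple of `step` using stochastic rounding.

    Illustrative sketch: an element lying between two grid points is rounded
    up with probability equal to its fractional distance from the lower point,
    so E[stochastic_round(x)] == x (the unbiasedness property mentioned above).
    """
    scaled = x / step                         # position on the quantization grid
    lower = torch.floor(scaled)               # nearest grid point below
    prob_up = scaled - lower                  # fractional remainder in [0, 1)
    round_up = (torch.rand_like(x) < prob_up).to(x.dtype)
    return (lower + round_up) * step
```

In a quantized-training setting of the kind the abstract describes, such an operator would typically be applied to gradients, weights, or activations during back-propagation, e.g. `g_q = stochastic_round(g, step=2**-8)` before the optimizer update; the step size here is an arbitrary example, not a value taken from the paper.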
arXiv.org Artificial Intelligence
Nov-4-2025