EQuARX: Efficient Quantized AllReduce in XLA for Distributed Machine Learning Acceleration
Ahmed, Ibrahim; Schaefer, Clemens; Tabak, Gil; Vnukov, Denis; Zhang, Zenong; Chern, Felix; Yevtushenko, Anatoliy; Davis, Andy
arXiv.org Artificial Intelligence
While Large Language Models (LLMs) have become highly influential, their enormous scale presents significant deployment challenges. Efficiently serving these models typically requires distributing them across numerous accelerator devices, which introduces substantial performance overhead from inter-device communication (collectives). While model quantization has been widely adopted to reduce the memory and compute requirements of LLM weights and activations with minimal quality impact, applying quantization directly to collectives like AllReduce is inherently difficult because of the inter-device summation involved, which can lead to numerical instability or significant error accumulation. In this work, we present a native dynamic block-wise efficient quantized AllReduce within the XLA compiler for TPUs (EQuARX). By using TPU-friendly quantization and deep pipelining of communication and compute, EQuARX with int8 precision achieves a 1.8x speedup over the baseline BF16 AllReduce across various network topologies. Furthermore, EQuARX accelerates the prefill stage of Gemma 3 27B by 1.25x and of Gemma 3 12B by 1.1x, with small to negligible impact on quality.
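The abstract's core idea, dynamic block-wise quantization applied around an AllReduce-style sum, can be illustrated with a minimal NumPy sketch. The block size, helper names, and the dequantize-then-accumulate order below are illustrative assumptions for exposition, not the paper's actual TPU/XLA implementation (which pipelines communication and compute on-device).

```python
import numpy as np

def quantize_blockwise(x, block=128):
    # Illustrative helper (not from the paper): pad to a multiple of the
    # block size and reshape so each row is one quantization block.
    n = x.size
    pad = (-n) % block
    xp = np.pad(x, (0, pad)).reshape(-1, block)
    # Dynamic per-block scale: map each block's max magnitude to int8 range.
    scale = np.abs(xp).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0.0, 1.0, scale)  # avoid divide-by-zero
    q = np.clip(np.round(xp / scale), -127, 127).astype(np.int8)
    return q, scale, n

def dequantize_blockwise(q, scale, n):
    # Rescale each block and drop the padding.
    return (q.astype(np.float32) * scale).reshape(-1)[:n]

def quantized_allreduce(shards, block=128):
    # Each "device" quantizes its shard before communication; the sum is
    # accumulated in float32 after dequantization, sidestepping the int8
    # overflow/error-accumulation issue the abstract mentions.
    total = np.zeros(shards[0].size, dtype=np.float32)
    for x in shards:
        q, s, n = quantize_blockwise(x.astype(np.float32), block)
        total += dequantize_blockwise(q, s, n)
    return total
```

Compared with summing raw int8 values, dequantizing before accumulation keeps the per-element error bounded by roughly half an int8 step per participating shard, which is why block-wise dynamic scaling can track the BF16 result closely.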
Jun-24-2025