Trustworthy Efficient Communication for Distributed Learning using LQ-SGD Algorithm
Li, Hongyang, Bai, Lincen, Wu, Caesar, Chadli, Mohammed, Mammar, Said, Bouvry, Pascal
arXiv.org Artificial Intelligence
We propose LQ-SGD (Low-Rank Quantized Stochastic Gradient Descent), a communication-efficient gradient compression algorithm for distributed training. LQ-SGD builds on PowerSGD by combining low-rank approximation with logarithmic quantization, which drastically reduces communication overhead while preserving convergence speed and model accuracy. In addition, LQ-SGD and other compression-based methods show stronger resistance to gradient inversion attacks than plain SGD, offering a more robust and efficient optimization path for distributed learning systems. As learning models grow rapidly in size, distributed training has become a fundamental approach to improving model performance and scalability. However, such systems typically rely on many compute nodes working collaboratively, and synchronizing model parameters and gradients across them introduces significant communication overhead.
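The abstract does not spell out the exact compression pipeline, but the two named ingredients can be illustrated. Below is a minimal sketch, assuming a PowerSGD-style single power iteration for the rank-r factors and a sign-plus-rounded-log2 quantizer for the factor entries; the function names (low_rank_log_quantize, log_dequant) and parameter choices (rank=4, bits=4) are illustrative, not taken from the paper.

```python
import torch

def low_rank_log_quantize(grad: torch.Tensor, rank: int = 4, bits: int = 4):
    """Compress a 2-D gradient: rank-r factorization, then log-quantize the factors."""
    m, n = grad.shape
    # One PowerSGD-style power iteration yields P (m x r) and Q (n x r)
    # such that grad is approximated by P @ Q.T.
    q = torch.randn(n, rank)
    p = grad @ q
    p, _ = torch.linalg.qr(p)          # orthonormalize the left factor
    q = grad.t() @ p

    def log_quant(x: torch.Tensor):
        # Keep the sign; round log2 of the magnitude to an int8 exponent,
        # clipped to the range representable with the given bit budget.
        sign = torch.sign(x).to(torch.int8)
        exp = torch.round(torch.log2(x.abs().clamp(min=1e-12)))
        exp = exp.clamp(min=-(2 ** (bits - 1)), max=2 ** (bits - 1) - 1)
        return sign, exp.to(torch.int8)

    return log_quant(p), log_quant(q)

def log_dequant(sign_exp):
    """Invert log-quantization: x_hat = sign * 2**exp."""
    sign, exp = sign_exp
    return sign.to(torch.float32) * torch.pow(2.0, exp.to(torch.float32))

# A receiving worker would reconstruct the gradient from the two factors.
grad = torch.randn(256, 128)
p_c, q_c = low_rank_log_quantize(grad)
grad_hat = log_dequant(p_c) @ log_dequant(q_c).t()
```

Under these assumptions the communication cost drops from m*n full-precision values to roughly (m+n)*r sign/exponent pairs at a few bits each, which is where the claimed savings over uncompressed SGD would come from.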
Jun-24-2025