PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

Vogels, Thijs, Karimireddy, Sai Praneeth, Jaggi, Martin

Neural Information Processing Systems

We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization. Despite the significant attention received, current compression schemes either do not scale well, or fail to achieve the target test accuracy. We propose a low-rank gradient compressor that can i) compress gradients rapidly, ii) efficiently aggregate the compressed gradients using all-reduce, and iii) achieve test performance on par with SGD. The proposed algorithm is the only method evaluated that achieves consistent wall-clock speedups when benchmarked against regular SGD with an optimized communication backend. We demonstrate reduced training times for convolutional networks as well as LSTMs on common datasets.
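The abstract describes the compressor only at a high level. As a rough illustration of the idea (not the authors' implementation), the sketch below forms a rank-r approximation of a per-layer gradient matrix with a single power iteration and uses plain averages where a distributed run would issue the two all-reduce calls; pieces of the full method such as error feedback are omitted, and all names are illustrative.

    # Minimal single-process sketch of rank-r gradient compression
    # via one power iteration; plain averages stand in for the two
    # all-reduce calls a real distributed run would use.
    import numpy as np

    def powersgd_step(worker_grads, q):
        """Aggregate 2-D per-worker gradients through low-rank factors P and Q."""
        # 1) each worker computes P_i = M_i Q; the P_i are then all-reduced
        p = sum(g @ q for g in worker_grads) / len(worker_grads)
        # 2) orthonormalise the columns of the shared P
        p, _ = np.linalg.qr(p)
        # 3) each worker computes Q_i = M_i^T P; the Q_i are then all-reduced
        q = sum(g.T @ p for g in worker_grads) / len(worker_grads)
        # decompressed, aggregated update plus Q for warm-starting the next step
        return p @ q.T, q

    rank, (n, m) = 2, (256, 128)
    rng = np.random.default_rng(0)   # shared seed: every worker starts from the same Q
    q = rng.standard_normal((m, rank))
    grads = [rng.standard_normal((n, m)) for _ in range(4)]  # 4 simulated workers
    update, q = powersgd_step(grads, q)
    print(update.shape)              # (256, 128)

In the full method, reusing Q across steps (warm start) and keeping an error-feedback buffer of the compression residual are what let this single-iteration, low-rank approximation reach test accuracy on par with SGD.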


Reviews: PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

Neural Information Processing Systems

Update: I have carefully read the authors' rebuttal. I have raised my score from 5 to 6 to reflect their clarification about Figure 3 and Table 6. It still seems that the speedups of the current formulation are often not of great practical significance, except for the language model, which achieved a 2x wall-clock speedup. As another reviewer noted, it is disappointing that the main paper reports only the average batch time rather than the overall training time, as that makes it unclear whether latency and other overheads between batches might be a significant concern. The author rebuttal notes that Appendix C shows time-to-accuracy, which would be good to mention in the main paper. But those results still appear mixed: for CIFAR10, SGD beats Rank 1 and seems only competitive with Ranks 2 and 4, whereas for the language model all ranks seem to convincingly beat SGD.

