Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression

Xinmeng Huang, Yiming Chen, Wotao Yin

Neural Information Processing Systems 

Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While there are many results on convergence rates with compressed communication, a matching lower bound is still missing. Analyses of algorithms with communication compression have identified two abstract properties that guarantee convergence: the unbiased property and the contractive property.
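For concreteness, a minimal sketch of the two compressor classes using standard textbook examples (random-k sparsification as an unbiased compressor, top-k as a contractive one; these particular instances are illustrative assumptions, not taken from this paper):

```python
import numpy as np

def rand_k(x, k):
    """Unbiased random-k sparsification: E[C(x)] = x.
    Keeps k coordinates chosen uniformly at random, rescaled by d/k."""
    d = x.size
    mask = np.zeros(d)
    idx = np.random.choice(d, size=k, replace=False)
    mask[idx] = 1.0
    return (d / k) * mask * x  # rescaling by d/k makes the estimator unbiased

def top_k(x, k):
    """Contractive top-k compression: ||C(x) - x||^2 <= (1 - k/d) * ||x||^2.
    Keeps the k largest-magnitude coordinates; biased, but contractive."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]  # indices of the k largest entries
    out[idx] = x[idx]
    return out
```

The trade-off these examples illustrate: the unbiased rand-k estimator has zero bias but variance that grows with the rescaling factor d/k, whereas top-k incurs bias but its error is deterministically bounded by a fixed fraction of the input norm.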