CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression

Neural Information Processing Systems 

Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Moreover, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communication rounds (faster convergence), e.g., Nesterov's accelerated gradient descent [31, 32] and Adam [14]. In order to combine the benefits of communication compression and convergence acceleration, we propose a \emph{compressed and accelerated} gradient method based on ANITA [20] for distributed optimization, which we call CANITA. Our results show that as long as the number of devices $n$ is large (often true in distributed/federated learning), or the compression parameter $\omega$ is not very high, CANITA achieves the faster convergence rate $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$, i.e., the number of communication rounds is $O\Big(\sqrt{\frac{L}{\epsilon}}\Big)$ (vs. the $O\big(\frac{L}{\epsilon}\big)$ rounds of previous non-accelerated compressed methods). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds).
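To make the role of the compression parameter $\omega$ concrete, below is a minimal sketch of a standard unbiased random sparsification (rand-$k$) compressor of the kind commonly used in compressed-communication methods. The function name \texttt{rand\_k\_compress} and the toy setup are illustrative assumptions for this sketch, not the specific compressors or algorithmic steps of CANITA itself.

\begin{verbatim}
import numpy as np

def rand_k_compress(x, k, rng=None):
    """Unbiased random-k sparsification: keep k of d coordinates, rescale by d/k.

    This compressor satisfies E[C(x)] = x and E||C(x) - x||^2 <= omega * ||x||^2
    with omega = d/k - 1, so keeping fewer coordinates (smaller k) means
    more aggressive compression and a larger omega.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)   # coordinates to transmit
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)                  # rescale to keep the estimate unbiased
    return out

# Toy example: each of n devices compresses its local gradient before
# communication, and the server averages the compressed messages.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, k = 10, 100, 10                        # devices, dimension, kept coordinates
    grads = rng.standard_normal((n, d))
    compressed = np.stack([rand_k_compress(g, k, rng) for g in grads])
    avg = compressed.mean(axis=0)                # aggregate used in one communication round
    print("omega =", d / k - 1)
\end{verbatim}

Keeping fewer coordinates (smaller $k$) lowers the per-round communication cost but increases $\omega$; this is the trade-off behind the statement above that CANITA retains the accelerated rate whenever $\omega$ is not too high or the number of devices $n$ is large.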