Communication-Efficient Stochastic Distributed Learning
Xiaoxing Ren, Nicola Bastianello, Karl H. Johansson, Thomas Parisini
arXiv.org Artificial Intelligence
We address distributed learning problems, both nonconvex and convex, over undirected networks. In particular, we design a novel algorithm based on the distributed Alternating Direction Method of Multipliers (ADMM) to address the challenges of high communication costs and large datasets. Our design tackles these challenges i) by enabling the agents to perform multiple local training steps between each round of communications; and ii) by allowing the agents to employ stochastic gradients while carrying out local computations. We show that the proposed algorithm converges to a neighborhood of a stationary point for nonconvex problems, and of an optimal point for convex problems. We also propose a variant of the algorithm that incorporates variance reduction, thus achieving exact convergence. We show that the resulting algorithm indeed converges to a stationary (or optimal) point, and moreover that local training accelerates convergence. We thoroughly compare the proposed algorithms with the state of the art, both theoretically and through numerical results.
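The abstract describes the algorithmic template but not its exact recursion, so the following is a minimal sketch of the general idea it names: decentralized consensus ADMM over an undirected graph, where each agent approximates its primal subproblem with a few stochastic gradient steps (the "local training" between communication rounds). The ring topology, least-squares losses, and all hyperparameters (`rho`, `step`, `n_local`) are illustrative assumptions, not the paper's choices.

```python
import numpy as np

# Sketch: decentralized consensus ADMM with inexact local updates. Each agent
# approximately solves its primal subproblem with a few stochastic gradient
# steps between communication rounds. This follows the standard consensus-ADMM
# template, not the paper's exact recursion; all data and hyperparameters are
# illustrative.

rng = np.random.default_rng(0)

n_agents, dim, n_local = 4, 5, 10        # agents, model size, local SGD steps
rho, step = 1.0, 0.05                    # ADMM penalty, SGD step size

# Ring graph: agent i communicates with i-1 and i+1 (undirected).
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

# Local least-squares losses f_i(x) = 0.5 * ||A_i x - b_i||^2 as stand-ins
# for the local training objectives.
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(20) for _ in range(n_agents)]

def stoch_grad(i, x):
    """Stochastic gradient of f_i from a single randomly sampled data point."""
    k = rng.integers(len(b[i]))
    return A[i][k] * (A[i][k] @ x - b[i][k])

x = [np.zeros(dim) for _ in range(n_agents)]   # primal iterates
p = [np.zeros(dim) for _ in range(n_agents)]   # dual iterates

for rnd in range(200):                          # communication rounds
    x_old = [xi.copy() for xi in x]
    for i in range(n_agents):
        # Penalty targets: averages with neighbors' last communicated iterates
        # (exchanged once per ADMM round).
        targets = [(x_old[i] + x_old[j]) / 2 for j in neighbors[i]]
        xi = x[i].copy()
        for _ in range(n_local):                # local training steps
            g = stoch_grad(i, xi) + p[i]
            g += rho * sum(2 * (xi - t) for t in targets)
            xi -= step * g
        x[i] = xi
    for i in range(n_agents):                   # dual ascent on consensus gap
        p[i] += rho * sum(x[i] - x[j] for j in neighbors[i])

print("consensus gap:", max(np.linalg.norm(x[i] - x[0]) for i in range(n_agents)))
```

Because the local steps use plain stochastic gradients, this sketch converges only to a neighborhood, mirroring the abstract's first result; the variance-reduced variant would replace `stoch_grad` with, e.g., an SVRG-style estimator to obtain exact convergence.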
Jan-23-2025
- Country:
- Europe
- Denmark (0.14)
- Italy (0.14)
- Sweden (0.14)
- United Kingdom (0.14)
- North America > United States (0.14)
- Genre:
- Research Report (1.00)
- Industry:
- Education (1.00)
- Energy > Power Industry (0.46)