LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Tianyi Chen, Georgios Giannakis, Tao Sun, Wotao Yin
Neural Information Processing Systems
This paper presents a new class of gradient methods for distributed machine learning that adaptively skip gradient calculations to learn with reduced communication and computation. Simple rules are designed to detect slowly-varying gradients and, therefore, trigger the reuse of outdated gradients. The resultant gradient-based algorithms are termed Lazily Aggregated Gradient -- justifying our acronym LAG used henceforth. Theoretically, the merits of this contribution are: i) the convergence rate is the same as that of batch gradient descent in strongly convex, convex, and nonconvex cases; and ii) if the distributed datasets are heterogeneous (quantified by certain measurable constants), the communication rounds needed to achieve a targeted accuracy are reduced thanks to the adaptive reuse of lagged gradients. Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared with alternatives.
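As a concrete illustration of the lazy-aggregation idea described in the abstract, below is a minimal Python sketch of a worker-side trigger rule in the spirit of LAG: each worker uploads a fresh gradient only when it has drifted far enough from the last gradient it communicated, measured against recent parameter progress, and the server otherwise reuses the lagged copy. The function name `lag_wk_sketch`, the knobs `trigger_ratio` and `memory`, and the exact squared-Euclidean threshold are illustrative assumptions, not the paper's derived constants.

```python
import numpy as np

def lag_wk_sketch(grad_fns, theta0, lr=0.1, num_iters=100,
                  trigger_ratio=0.5, memory=10):
    """Sketch of lazily aggregated gradient descent (worker-triggered).

    grad_fns: one gradient callable per worker (its local loss gradient).
    trigger_ratio, memory: illustrative knobs; the paper ties the threshold
    to the stepsize, the number of workers, and a fixed lag window.
    """
    M = len(grad_fns)
    theta = np.asarray(theta0, dtype=float).copy()
    lagged = [g(theta) for g in grad_fns]       # round 0: every worker uploads
    agg = np.sum(lagged, axis=0)                # server keeps a running aggregate
    hist = [theta.copy()]                       # recent iterates for the trigger
    for _ in range(num_iters):
        theta = theta - (lr / M) * agg          # step with possibly outdated gradients
        hist.append(theta.copy())
        hist = hist[-(memory + 1):]
        # Trigger threshold: scaled sum of recent parameter progress.
        progress = sum(np.sum((hist[i + 1] - hist[i]) ** 2)
                       for i in range(len(hist) - 1))
        rhs = trigger_ratio / (lr ** 2 * M ** 2) * progress
        for m, g in enumerate(grad_fns):
            fresh = g(theta)                    # computed locally at worker m
            # Upload only if the gradient changed enough since the last upload;
            # otherwise the server silently reuses lagged[m] (no communication).
            if np.sum((fresh - lagged[m]) ** 2) > rhs:
                agg += fresh - lagged[m]        # incremental update at the server
                lagged[m] = fresh
    return theta
```

Note that in this worker-side sketch each worker still computes its gradient every round in order to evaluate the trigger, so only communication is saved; the abstract's claim of reduced computation corresponds to the variant where the decision is made at the server from stale information, sparing the worker the gradient evaluation altogether.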