Distributed Event-Based Learning via ADMM
Guner Dilsad Er, Sebastian Trimpe, Michael Muehlebach
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) it substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data distribution among the different agents. We can therefore guarantee convergence even if the local data distributions of the agents are arbitrarily distinct. We analyze the convergence rate of the algorithm and derive accelerated convergence rates in a convex setting. We also characterize the effect of communication drops and demonstrate that our algorithm is robust to communication failures. The article concludes with numerical results from a distributed LASSO problem and distributed learning tasks on the MNIST and CIFAR-10 datasets. The experiments demonstrate communication savings of 50% or more due to the event-based communication strategy, show resilience to heterogeneous data distributions, and highlight that our approach outperforms common baselines such as FedAvg, FedProx, and FedADMM.
May-17-2024
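The core idea of event-based communication in consensus ADMM can be illustrated with a toy example: each agent transmits its message only when it has changed by more than a threshold since its last transmission, and the coordinator averages the most recently received messages. The sketch below is illustrative only; the objective (a simple quadratic per agent), the threshold rule, and all names and parameters are assumptions, not the algorithm from the paper.

```python
def event_based_consensus_admm(a, rho=1.0, threshold=1e-3, iters=200):
    """Toy consensus ADMM where agent i minimizes 0.5 * (x - a[i])**2.

    Agents broadcast their message x_i + u_i only when it deviates from
    the last transmitted value by more than `threshold` (event trigger).
    Returns the consensus estimate, messages sent, and messages possible.
    """
    n = len(a)
    x = [0.0] * n          # local primal variables
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # global consensus variable
    last_sent = [0.0] * n  # last message each agent actually transmitted
    sent = 0               # number of transmissions performed
    for _ in range(iters):
        # local x-update (closed form for the quadratic objective)
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        msg = [x[i] + u[i] for i in range(n)]
        # event trigger: transmit only if the message changed enough
        for i in range(n):
            if abs(msg[i] - last_sent[i]) > threshold:
                last_sent[i] = msg[i]
                sent += 1
        # coordinator averages the most recently received messages
        z = sum(last_sent) / n
        # dual update
        u = [u[i] + x[i] - z for i in range(n)]
    return z, sent, n * iters

z, sent, total = event_based_consensus_admm([1.0, 2.0, 3.0, 6.0])
```

In this toy problem the consensus variable converges near the average of the local minimizers, while the event trigger suppresses transmissions once the iterates have settled, so far fewer than `n * iters` messages are sent.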