Robust Decentralized Learning Using ADMM with Unreliable Agents

Qunwei Li, Bhavya Kailkhura, Ryan Goldhahn, Priyadip Ray, Pramod K. Varshney

arXiv.org Machine Learning 

Many machine learning problems can be formulated as consensus optimization problems, which can be solved efficiently via a cooperative multi-agent system. However, the agents in the system can be unreliable for a variety of reasons, such as noise, faults, and attacks. Such unreliable agents provide falsified data that steers the optimization process in the wrong direction and degrades the performance of distributed machine learning algorithms. This paper considers the problem of decentralized learning using ADMM in the presence of unreliable agents. First, we rigorously analyze the effect of falsified updates (in the ADMM learning iterations) on the convergence behavior of the multi-agent system. We show that the algorithm converges linearly to a neighborhood of the optimal solution under certain conditions and characterize the neighborhood size analytically. Next, we provide guidelines for network structure design to achieve faster convergence. We also provide necessary conditions on the falsified updates for exact convergence to the optimal solution. Finally, to mitigate the influence of unreliable agents, we propose a robust variant of ADMM and show its resilience to unreliable agents.
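As a rough illustration of the setting described above, the sketch below runs decentralized consensus ADMM on a small ring network of least-squares agents, where one agent broadcasts noise-corrupted (falsified) states to its neighbors. The topology, local losses, penalty parameter rho, and noise model are illustrative assumptions, and the updates follow a standard decentralized consensus ADMM form; this is not the paper's exact algorithm or its robust variant.

```python
# Minimal sketch (not the paper's method): decentralized consensus ADMM with
# quadratic local losses f_i(x) = 0.5*||A_i x - b_i||^2 on a ring network,
# where one unreliable agent shares noise-corrupted states with its neighbors.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, rho, iters = 5, 3, 1.0, 200
unreliable = {4}          # assumed: agent 4 broadcasts falsified (noisy) states
sigma = 0.5               # assumed noise level of the falsified updates

# Assumed ring topology: neighbors of agent i
neighbors = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

# Synthetic local data; the minimizer of sum_i f_i is the least-squares solution
x_true = rng.normal(size=dim)
A = [rng.normal(size=(10, dim)) for _ in range(n_agents)]
b = [A[i] @ x_true + 0.01 * rng.normal(size=10) for i in range(n_agents)]

x = [np.zeros(dim) for _ in range(n_agents)]       # primal states
alpha = [np.zeros(dim) for _ in range(n_agents)]   # dual states

def broadcast(states):
    """States as received by neighbors: unreliable agents add noise."""
    return [states[i] + sigma * rng.normal(size=dim) if i in unreliable
            else states[i] for i in range(n_agents)]

for k in range(iters):
    shared = broadcast(x)
    x_new = []
    for i in range(n_agents):
        d_i = len(neighbors[i])
        # Closed-form x-update of consensus ADMM for a quadratic loss:
        # (A_i^T A_i + 2*rho*d_i*I) x = A_i^T b_i - alpha_i + rho*sum_j (x_i + x_j)
        rhs = A[i].T @ b[i] - alpha[i] + rho * sum(x[i] + shared[j]
                                                   for j in neighbors[i])
        lhs = A[i].T @ A[i] + 2 * rho * d_i * np.eye(dim)
        x_new.append(np.linalg.solve(lhs, rhs))

    # Dual update uses the (possibly falsified) states received from neighbors
    shared_new = broadcast(x_new)
    for i in range(n_agents):
        alpha[i] = alpha[i] + rho * sum(x_new[i] - shared_new[j]
                                        for j in neighbors[i])
    x = x_new

x_ls = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("distance of reliable agents to the least-squares solution:")
print([float(np.linalg.norm(x[i] - x_ls))
       for i in range(n_agents) if i not in unreliable])
```

With `sigma = 0`, all agents converge to the global least-squares solution; with a nonzero `sigma`, the reliable agents only reach a neighborhood of it, which is the convergence-to-a-neighborhood behavior the abstract characterizes.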
