Multiplayer Federated Learning: Reaching Equilibrium with Less Communication

TaeHo Yoon, Sayantan Choudhury, Nicolas Loizou

arXiv.org Machine Learning 

Federated Learning (FL) has emerged as a powerful collaborative learning paradigm in which multiple clients jointly train a machine learning model without sharing their local data. In the classical FL setting, a central server coordinates multiple clients (e.g., mobile devices, edge devices) to collaboratively learn a shared global model without exchanging their training data [48, 54, 79, 64]. In this scenario, each client performs local computations on its private data and periodically communicates model updates to the server, which aggregates them to update the global model. This collaborative approach has been successfully applied in various domains, including natural language processing [69, 43], computer vision [70, 63], and healthcare [4, 116].

Despite their success, traditional FL frameworks rely on the key assumption that all participants are fully cooperative and share aligned objectives, collectively working towards optimizing the performance of a shared global model (e.g., minimizing the average of individual loss functions). This assumption overlooks situations where participants have individual objectives or competitive interests that may not align with the collective goal. Diverse examples of such scenarios have been extensively studied in the game theory literature, including Cournot competition in economics [2], optical networks [91], electricity markets [98], energy consumption control in smart grids [120], and mobile robot control [49]. Despite their relevance, these applications have yet to be associated with FL, presenting an unexplored opportunity to bridge game theory and FL for more robust and realistic frameworks.
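The classical server-client loop described above (local training on private data, followed by server-side aggregation) can be sketched as a minimal FedAvg-style round. This is an illustrative toy with synthetic linear-regression data, not the method of this paper; all function names and hyperparameters here are hypothetical choices.

```python
# Minimal FedAvg-style sketch (toy example, not the paper's algorithm):
# each client takes a few gradient steps on its private data, and the
# server averages the resulting local models into the next global model.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """One client's local training: gradient steps on mean squared error."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE loss
        w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """Server broadcasts w_global; clients train locally; server averages."""
    updates = [local_update(w_global, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Three clients, each holding its own private samples of the same model.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fedavg_round(w, clients)
print(w)  # converges close to w_true = [2, -1]
```

Note that only model parameters cross the network; the raw `(X, y)` pairs stay on each client, which is the privacy property the cooperative FL setting relies on.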
