Personalized Federated Learning via Stacking

Cantu-Cervini, Emilio

arXiv.org Artificial Intelligence 

Federated Learning (FL) is an area of research that develops methods to allow multiple parties to collaboratively train machine learning models without exchanging data. First introduced in 2016 by McMahan et al. to allow a large number of edge devices to collaboratively train language models [1], FL has been successfully applied to several domains where, for regulatory or privacy reasons, models cannot be trained on centrally pooled data. Most FL approaches result in a single collaboratively trained global model that every client uses for inference. Personalized Federated Learning (PFL) recognizes that in some non-IID settings, performance improvements are possible if each client adapts or personalizes the global model to its own data. Approaches range from clients fine-tuning the global model on private data to client clustering, among others discussed in Section 2. In this paper, we build on prior work [2] and explore a simple personalization approach that avoids training a global model that is then personalized. Instead, each client employs privacy-preserving techniques [3] to train a model on its own data and makes that model public to the federation.
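To make the stacking idea concrete, the following is a minimal toy sketch of the scheme the abstract describes: each client trains a base model locally and publishes it, then personalizes by fitting a meta-learner on the public models' predictions over its own data. The setup (linear regression, least-squares base and meta models, the `personalize` helper) is entirely hypothetical and stands in for whatever models and privacy-preserving training the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical federation: each client holds non-IID data drawn from
# its own linear model (toy stand-in for real heterogeneous clients).
n_clients, n_features, n_samples = 3, 5, 200
true_weights = [rng.normal(size=n_features) for _ in range(n_clients)]

def make_data(w):
    X = rng.normal(size=(n_samples, n_features))
    y = X @ w + 0.1 * rng.normal(size=n_samples)
    return X, y

datasets = [make_data(w) for w in true_weights]

# Step 1: each client trains a base model on its private data and
# publishes only the model (here, an ordinary least-squares fit).
public_models = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in datasets]

# Step 2 (stacking): a client builds meta-features from every public
# model's predictions on its own data, then fits a meta-learner on them.
def personalize(X, y, models):
    meta_X = np.column_stack([X @ w for w in models])  # one column per model
    meta_w = np.linalg.lstsq(meta_X, y, rcond=None)[0]
    return meta_w

for i, (X, y) in enumerate(datasets):
    meta_w = personalize(X, y, public_models)
    preds = np.column_stack([X @ w for w in public_models]) @ meta_w
    mse = np.mean((preds - y) ** 2)
    print(f"client {i}: stacking weights = {np.round(meta_w, 2)}, MSE = {mse:.4f}")
```

In this toy run the meta-learner tends to place most of its weight on the client's own base model, since the other clients' data distributions differ; with partially overlapping distributions, the stacking weights would instead blend several public models.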
