FedAli: Personalized Federated Learning with Aligned Prototypes through Optimal Transport

Sannara Ek, Kaile Wang, François Portet, Philippe Lalanda, Jiannong Cao

arXiv.org Artificial Intelligence 

Federated Learning (FL) [34] has significantly enhanced the capabilities of edge devices by creating platforms that enable numerous user devices to collaborate in training machine learning models. In this decentralized framework, the training process is distributed across multiple data sources, and only model parameters are communicated instead of raw, privacy-sensitive user data. Despite its promising potential, integrating FL into pervasive computing environments [51], where performance must be user-centric, presents several challenges. A significant limitation of FL is the heterogeneity inherent in real-world scenarios, where each user's data distribution may vary considerably due to differences in user behavior, local environments, and other contextual factors [19, 27, 28, 26]. This heterogeneity often leads to client drift, where the optimization goals of individual user models diverge, thereby diminishing the benefits of collaboration through FL.

Additionally, conventional FL is server-centric, aiming to optimize a single global model that covers a wide range of users, which may be suboptimal for individual user needs. While personalized federated learning offers solutions for individual optimization needs, we identify a further user-centric requirement for the FL paradigm. Despite the achievements of Federated Domain Generalization (FDG) [37, 15], we argue that a personalized client model should also be adaptable to the data distributions of other participating clients.
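To make the server-centric loop described above concrete, below is a minimal sketch of one FedAvg-style [34] communication round in which only parameters cross the network. The function names (`fedavg_round`, `local_update`), the uniform client weighting, and the toy per-client objective are assumptions for illustration; they are not the FedAli method this paper proposes.

```python
# Hypothetical sketch: one server-centric FL round in the style of
# FedAvg [34]. Names and the uniform weighting are illustrative
# assumptions, not FedAli's actual aggregation rule.
from typing import Callable, Dict, List

import numpy as np

Params = Dict[str, np.ndarray]  # model parameters keyed by layer name


def fedavg_round(global_params: Params,
                 local_update: Callable[[Params, int], Params],
                 client_ids: List[int]) -> Params:
    """Clients train locally; the server averages the returned parameters.

    Only parameters are exchanged; raw, privacy-sensitive user data stays
    on each device. Under heterogeneous (non-IID) data, each local update
    pulls the model toward a different local optimum, producing the
    client drift discussed above.
    """
    client_params = [
        local_update({k: v.copy() for k, v in global_params.items()}, cid)
        for cid in client_ids
    ]
    return {
        name: np.mean([p[name] for p in client_params], axis=0)
        for name in global_params
    }


if __name__ == "__main__":
    # Toy local update: each client nudges the weights toward its own
    # synthetic optimum, mimicking divergent per-client objectives.
    rng = np.random.default_rng(0)

    def local_update(params: Params, cid: int) -> Params:
        client_optimum = rng.normal(loc=cid, scale=0.1, size=params["w"].shape)
        params["w"] += 0.5 * (client_optimum - params["w"])
        return params

    w = {"w": np.zeros(4)}
    for _ in range(3):  # three communication rounds
        w = fedavg_round(w, local_update, client_ids=[0, 1, 2])
    print(w["w"])  # the averaged model sits between conflicting optima
```

The toy run illustrates the tension motivating personalization: the single averaged model lands between the clients' conflicting optima rather than at any one of them.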