Federated Representation Learning in the Under-Parameterized Regime

Renpu Liu, Cong Shen, Jing Yang

arXiv.org Artificial Intelligence 

In the development of machine learning (ML), the role of representation learning has become increasingly essential. It transforms raw data into meaningful features, reveals hidden patterns and insights in data, and facilitates efficient learning of various ML tasks such as meta-learning (Tripuraneni et al., 2021), multi-task learning (Wang et al., 2016a), and few-shot learning (Du et al., 2020). Recently, representation learning has been introduced into the federated learning (FL) framework to cope with the heterogeneous local datasets at participating clients (Liang et al., 2020). In the FL setting, it is often assumed that all clients share a common representation, which works in conjunction with personalized local heads to realize personalized prediction while harnessing the collective training power (Arivazhagan et al., 2019; Collins et al., 2021; Zhong et al., 2022; Shen et al., 2023). Existing theoretical analyses of representation learning usually assume that the adopted model is over-parameterized and can nearly fit the ground-truth model (Tripuraneni et al., 2021; Wang et al., 2016a). While this may be valid for expressive models such as deep neural networks (He et al., 2016; Liu et al., 2017) or large language models (OpenAI, 2023; Touvron et al., 2023), it may be too restrictive for FL on resource-constrained devices, since adopting over-parameterized models in such a framework faces several significant challenges, as elaborated below.
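The shared-representation-plus-personalized-head structure described above (as in Collins et al., 2021) can be illustrated with a minimal sketch. The snippet below assumes a linear shared representation and per-client linear heads, and runs one local gradient step per client on a single machine; the class names, dimensions, and synthetic data are hypothetical and only illustrate the model split, not the authors' federated training protocol or server-side aggregation.

```python
import torch
import torch.nn as nn

class SharedRepresentation(nn.Module):
    """Global feature extractor phi: R^d -> R^k, shared by all clients."""
    def __init__(self, input_dim: int, rep_dim: int):
        super().__init__()
        self.layer = nn.Linear(input_dim, rep_dim, bias=False)

    def forward(self, x):
        return self.layer(x)

class ClientModel(nn.Module):
    """Personalized model: the common representation followed by a local head."""
    def __init__(self, shared: SharedRepresentation, rep_dim: int, output_dim: int):
        super().__init__()
        self.shared = shared                         # same object for every client
        self.head = nn.Linear(rep_dim, output_dim)   # client-specific head

    def forward(self, x):
        return self.head(self.shared(x))

# Toy setup: three clients share one representation, each keeps its own head.
d, k, num_clients = 10, 3, 3
shared = SharedRepresentation(d, k)
clients = [ClientModel(shared, k, 1) for _ in range(num_clients)]

# One local update per client on hypothetical local data; a real FL protocol
# would aggregate the clients' representation updates at the server.
for model in clients:
    x = torch.randn(32, d)   # synthetic local features (illustrative only)
    y = torch.randn(32, 1)   # synthetic local labels (illustrative only)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```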
