Factorized-FL: Personalized Federated Learning with Parameter Factorization & Similarity Matching
In real-world federated learning scenarios, participants may have their own personalized labels that are incompatible with those of other clients, because they use different label permutations or tackle entirely different tasks or domains. However, most existing FL approaches cannot effectively handle such extremely heterogeneous scenarios, since they often assume that (1) all participants use a synchronized set of labels, and (2) they train on the same tasks from the same domain. In this work, to tackle these challenges, we introduce Factorized-FL, which effectively handles label- and task-heterogeneous federated learning settings by factorizing the model parameters into a pair of rank-1 vectors, where one captures the common knowledge across different labels and tasks and the other captures knowledge specific to each local model's task. Moreover, based on the distance in the client-specific vector space, Factorized-FL performs a selective aggregation scheme that utilizes only the knowledge from the relevant participants for each client. We extensively validate our method on both label- and domain-heterogeneous settings, on which it outperforms state-of-the-art personalized federated learning methods.
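The two ideas in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the paper learns the factorized vectors during training, whereas here we simply take the best rank-1 approximation via SVD, and the function names (`factorize_rank1`, `selective_aggregate`), the cosine distance, and the threshold `tau` are all illustrative assumptions.

```python
import numpy as np

def factorize_rank1(W):
    """Approximate a weight matrix W (d_out x d_in) as an outer product
    u v^T of two rank-1 vectors, using the leading SVD pair.
    Conceptually, one vector plays the role of the shared knowledge and
    the other the client-specific knowledge described in the abstract."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    u = U[:, 0] * np.sqrt(s[0])
    v = Vt[0, :] * np.sqrt(s[0])
    return u, v

def cosine_distance(a, b):
    """Distance between two client-specific vectors (illustrative choice)."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def selective_aggregate(shared, specific, i, tau=0.5):
    """Selective aggregation sketch: client i averages the shared vectors
    of only those clients whose client-specific vectors lie within
    distance tau of its own (tau is an assumed hyperparameter)."""
    peers = [j for j in range(len(shared))
             if cosine_distance(specific[i], specific[j]) < tau]
    return np.mean([shared[j] for j in peers], axis=0)
```

Under this sketch, a client whose client-specific vector is far from everyone else's simply keeps its own shared vector, which is the intended behavior of restricting aggregation to relevant participants.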
Organization We provide in-depth descriptions of our algorithms, experimental setups (i.e., dataset configurations and implementation & training details), and additional experimental results and analysis, organized as follows:
- Section B: We describe dataset configurations for the label- and domain-heterogeneous scenarios.
- Section C: We elaborate on implementation and training details for our method and the baselines.
- Section D: We provide additional experimental results and analysis.

In this section, we describe the detailed configurations of the datasets used in the label- and domain-heterogeneous scenarios. The label permutations are randomly generated from different seeds.
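A seed-based label permutation, as mentioned above, can be sketched as follows. The helper name `make_label_permutation` and the class count are illustrative assumptions, not the paper's code.

```python
import numpy as np

def make_label_permutation(num_classes, seed):
    """Generate one client's label permutation from its own seed, so that
    each client's label indices are incompatible with the others'."""
    rng = np.random.default_rng(seed)
    return rng.permutation(num_classes)

# Each client remaps its ground-truth labels through its own permutation,
# e.g. remapped_label = perm[original_label].
perm_a = make_label_permutation(10, seed=0)
perm_b = make_label_permutation(10, seed=1)
```

Because the permutation is a pure function of the seed, each client's label mapping is reproducible across rounds while still differing across clients.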