Appendix A Derivation of Equation (7)
Table 5 shows the positioning of FedL2P against existing literature. The personalized policy can, for instance, be fixed, as in FedEx, which randomly samples per-client hyperparameters from learned categorical distributions. FedL2P targets scenarios where it is expensive to train from scratch for a new group of clients; this is illustrated in Section 4.4, where we adapt a publicly available pretrained model. It also suits scenarios where it is important to maintain a global model with high initial accuracy, although our approach does not critically depend on the global model's performance.
FedL2P: Federated Learning to Personalize
Royson Lee, Minyoung Kim, Da Li, Xinchi Qiu, Timothy Hospedales, Ferenc Huszár, Nicholas D. Lane
Federated learning (FL) research has made progress in developing algorithms for distributed learning of global models, as well as algorithms for local personalization of those common models to the specifics of each client's local data distribution. However, different FL problems may require different personalization strategies, and it may not even be possible to define an effective one-size-fits-all personalization strategy for all clients: depending on how similar each client's optimal predictor is to that of the global model, different personalization strategies may be preferred. In this paper, we consider the federated meta-learning problem of learning personalization strategies. Specifically, we consider meta-nets that induce the batch-norm and learning rate parameters for each client given local data statistics. By learning these meta-nets through FL, we allow the whole FL network to collaborate in learning a customized personalization strategy for each client. Empirical results show that this framework improves on a range of standard hand-crafted personalization baselines in both label and feature shift situations.
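The abstract describes meta-nets that map a client's local data statistics to that client's personalization hyperparameters: per-layer learning rates and batch-norm parameters. The toy NumPy sketch below illustrates that idea only; the `MetaNet` MLP, the `lr_net`/`bn_net` names, the choice of input statistics, and the interpretation of the batch-norm output as an interpolation coefficient between local and global running statistics are all illustrative assumptions, not the paper's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # maps reals to positives: suitable for learning rates
    return np.log1p(np.exp(x))

def sigmoid(x):
    # maps reals to (0, 1): suitable for mixing coefficients
    return 1.0 / (1.0 + np.exp(-x))

class MetaNet:
    """Tiny MLP mapping client data statistics to per-layer hyperparameters
    (illustrative stand-in for a learned meta-net)."""
    def __init__(self, stat_dim, n_out, hidden=16):
        self.W1 = rng.normal(0.0, 0.1, (stat_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_out))
        self.b2 = np.zeros(n_out)

    def __call__(self, stats):
        h = np.tanh(stats @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

n_layers = 4
client_stats = rng.normal(size=8)   # e.g. summary statistics of local data

lr_net = MetaNet(8, n_layers)       # hypothetical "learning-rate meta-net"
bn_net = MetaNet(8, n_layers)       # hypothetical "batch-norm meta-net"

lrs = softplus(lr_net(client_stats))    # positive per-layer learning rates
betas = sigmoid(bn_net(client_stats))   # per-layer weights in (0, 1)

# One plausible use of the BN output: interpolate each layer's running
# mean between the client's local estimate and the global model's.
mu_local = rng.normal(size=n_layers)
mu_global = rng.normal(size=n_layers)
mu_personalized = betas * mu_local + (1.0 - betas) * mu_global
```

In FL, only the meta-net weights would be aggregated across clients, so every client ends up with its own hyperparameters while the network collaborates on learning the mapping itself.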