A Generalized Meta Federated Learning Framework with Theoretical Convergence Guarantees
Jamali, Mohammad Vahid, Saber, Hamid, Bae, Jung Hyun
arXiv.org Artificial Intelligence
Meta federated learning (FL) is a personalized variant of FL, where multiple agents collaborate on training an initial shared model without exchanging raw data samples. The initial model should be trained so that current or new agents can easily adapt it to their local datasets after one or a few fine-tuning steps, thus improving model personalization. Conventional meta FL approaches minimize the average loss of agents on the local models obtained after one step of fine-tuning. In practice, agents may need to apply several fine-tuning steps to adapt the global model to their local data, especially under highly heterogeneous data distributions across agents. To this end, we present a generalized framework for meta FL by minimizing the average loss of agents on their local models after any arbitrary number ν of fine-tuning steps. For this generalized framework, we present a variant of the well-known federated averaging (FedAvg) algorithm and conduct a comprehensive theoretical convergence analysis to characterize the convergence speed as well as the behavior of the meta loss functions in both the exact and approximated cases. Our experiments on real-world datasets demonstrate superior accuracy and faster convergence for the proposed scheme compared to conventional approaches.
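The generalized objective described above can be illustrated with a minimal sketch: each agent evaluates its loss at the model obtained after ν local gradient steps, and the meta loss averages these post-adaptation losses. The quadratic agent losses, step size, and helper names below are hypothetical toy choices for illustration, not the paper's implementation.

```python
import numpy as np

def local_loss(w, A, b):
    """Toy quadratic loss 0.5 * ||A w - b||^2 for one agent."""
    r = A @ w - b
    return 0.5 * float(r @ r)

def local_grad(w, A, b):
    """Gradient of the quadratic loss with respect to w."""
    return A.T @ (A @ w - b)

def fine_tune(w, A, b, nu, alpha):
    """Inner loop: nu gradient steps from the shared model w."""
    for _ in range(nu):
        w = w - alpha * local_grad(w, A, b)
    return w

def meta_loss(w, agents, nu, alpha):
    """Average agent loss at the nu-step fine-tuned local models."""
    return float(np.mean([local_loss(fine_tune(w, A, b, nu, alpha), A, b)
                          for A, b in agents]))

# Hypothetical setup: five agents with random quadratic problems.
rng = np.random.default_rng(0)
agents = [(rng.standard_normal((4, 3)), rng.standard_normal(4))
          for _ in range(5)]
w0 = np.zeros(3)

# On these convex toy problems, allowing more fine-tuning steps (larger nu)
# yields a lower post-adaptation loss at the same shared initialization.
l1 = meta_loss(w0, agents, nu=1, alpha=0.05)
l3 = meta_loss(w0, agents, nu=3, alpha=0.05)
```

The conventional meta FL objective corresponds to `nu=1`; the generalized framework optimizes the shared initialization `w0` against `meta_loss` for an arbitrary `nu`.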
May-14-2025