Federated Learning under Covariate Shifts with Generalization Guarantees
Ramezani-Kebrya, Ali, Liu, Fanghui, Pethick, Thomas, Chrysos, Grigorios, Cevher, Volkan
arXiv.org Artificial Intelligence
To handle covariate shifts, we formulate a new global model training paradigm and propose Federated Importance-Weighted Empirical Risk Minimization (FTW-ERM), along with improved density-ratio matching methods that do not require perfect knowledge of the supremum of the true ratios. We also propose FITW-ERM, a communication-efficient variant with the same privacy guarantees as classical ERM in FL. We show theoretically that FTW-ERM achieves smaller generalization error than classical ERM in certain settings. Experimental results demonstrate the superiority of FTW-ERM over existing FL baselines in challenging imbalanced federated settings with data distribution shifts across clients.
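As a minimal sketch of the importance-weighting idea underlying this family of methods (not the paper's actual algorithm): under covariate shift, each sample's loss is reweighted by the density ratio w(x) = p_target(x) / p_source(x) before averaging. The function name and the example ratios below are illustrative assumptions; in FTW-ERM the ratios would come from a density-ratio matching procedure.

```python
import numpy as np

def iw_erm_loss(losses, ratios):
    """Importance-weighted empirical risk: mean of w(x_i) * loss_i.

    losses: per-sample losses computed on source-distribution data.
    ratios: estimated density ratios w(x_i) = p_target(x_i) / p_source(x_i)
            (assumed given here; estimating them is the hard part).
    """
    losses = np.asarray(losses, dtype=float)
    ratios = np.asarray(ratios, dtype=float)
    return float(np.mean(ratios * losses))

# Hypothetical example: up-weight samples over-represented in the target.
losses = [0.5, 1.0, 2.0]
ratios = [2.0, 1.0, 0.5]
print(iw_erm_loss(losses, ratios))  # mean of [1.0, 1.0, 1.0] -> 1.0
```

With uniform ratios (all 1.0) this reduces to classical ERM, which is why importance weighting can be viewed as a strict generalization of the standard federated objective.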
Jun-8-2023