A Closer Look at Personalized Fine-Tuning in Heterogeneous Federated Learning
Minghui Chen, Hrad Ghoukasian, Ruinan Jin, Zehua Wang, Sai Praneeth Karimireddy, Xiaoxiao Li
Federated Learning (FL) enables decentralized, privacy-preserving model training but struggles to balance global generalization and local personalization due to non-identical data distributions across clients. Personalized Fine-Tuning (PFT), a popular post-hoc solution, fine-tunes the final global model locally but often overfits to skewed client distributions or fails under domain shifts. We propose adapting Linear Probing followed by full Fine-Tuning (LP-FT), a principled centralized strategy for alleviating feature distortion (Kumar et al., 2022), to the FL setting. Through systematic evaluation across seven datasets and six PFT variants, we demonstrate LP-FT's superiority in balancing personalization and generalization. Our analysis uncovers federated feature distortion, a phenomenon where local fine-tuning destabilizes globally learned features, and theoretically characterizes how LP-FT mitigates this via phased parameter updates. We further establish conditions (e.g., partial feature overlap, covariate-concept shift) under which LP-FT outperforms standard fine-tuning, offering actionable guidelines for deploying robust personalization in FL.
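The two-phase structure of LP-FT (linear probing, then full fine-tuning) can be illustrated with a toy sketch. The model, data, and hyperparameters below are illustrative assumptions, not the authors' implementation: a one-dimensional "network" y = head * (feat * x) is trained by gradient descent, first with the feature parameter frozen, then end-to-end.

```python
# Minimal sketch of Linear Probing then full Fine-Tuning (LP-FT) on a
# toy model y = head * (feat * x). The setup is an illustrative
# assumption, not the paper's actual training procedure.

def grads(feat, head, data):
    """Gradients of mean squared error over (x, y) pairs."""
    gf = gh = 0.0
    for x, y in data:
        err = head * feat * x - y
        gf += 2 * err * head * x / len(data)
        gh += 2 * err * feat * x / len(data)
    return gf, gh

def lp_ft(data, feat=1.0, head=0.0, lr=0.05, lp_steps=50, ft_steps=50):
    # Phase 1: linear probing -- freeze the feature extractor and
    # update only the linear head, so the learned features are not
    # distorted by gradients from a randomly initialized head.
    for _ in range(lp_steps):
        _, gh = grads(feat, head, data)
        head -= lr * gh
    # Phase 2: full fine-tuning -- update all parameters, starting
    # from an already-aligned head so feature updates stay small.
    for _ in range(ft_steps):
        gf, gh = grads(feat, head, data)
        feat -= lr * gf
        head -= lr * gh
    return feat, head

# Toy target: y = 2 * x, so the product head * feat should approach 2.
data = [(x, 2.0 * x) for x in (-1.0, -0.5, 0.5, 1.0)]
feat, head = lp_ft(data)
print(abs(feat * head - 2.0) < 0.05)  # prints True
```

In a federated deployment the same schedule would be applied per client after receiving the final global model, which is the setting the paper evaluates.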
Nov-18-2025