data heterogeneity
One-shot Federated Learning via Synthetic Distiller-Distillate Communication
One-shot federated learning (FL) enables collaborative training of machine learning models in a single round of communication. While it offers better communication efficiency and privacy preservation than iterative FL, one-shot FL often compromises model performance. Prior research has primarily employed data-free knowledge distillation to optimize data generators and ensemble models so that local knowledge is better aggregated into the server model. However, these methods typically struggle with data heterogeneity, where inconsistent local data distributions cause teachers to provide misleading knowledge. They may also encounter scalability issues on complex datasets due to an inherent two-step information loss: first during local training (from data to model), and second when transferring knowledge to the server model (from model to inverted data). In this paper, we propose FedSD2C, a novel and practical one-shot FL framework designed to address these challenges. FedSD2C introduces a distiller that synthesizes informative distillates directly from local data to reduce information loss, and shares these synthetic distillates instead of inconsistent local models to tackle data heterogeneity. Our empirical results demonstrate that FedSD2C consistently outperforms other one-shot FL methods on more complex and realistic datasets, achieving up to 2.6$\times$ the performance of the best baseline.
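The core protocol — each client condensing its local data into a small synthetic distillate, then the server training on the pooled distillates in a single round — can be sketched briefly. The Python/PyTorch code below is a minimal illustration of this "share synthetic data instead of models" pattern under simplifying assumptions, not the authors' FedSD2C implementation: the per-class mean-matching objective and the helper names (`distill_client_data`, `one_shot_server_round`) are invented for illustration, and the paper's actual distiller is considerably more sophisticated.

```python
import torch
import torch.nn.functional as F

def distill_client_data(features, labels, n_synthetic=100, steps=500, lr=0.1):
    """Compress a client's local data into a few synthetic examples by
    matching per-class feature means (a simple distribution-matching
    surrogate; illustrative only)."""
    d = features.shape[1]
    syn_x = torch.randn(n_synthetic, d, requires_grad=True)
    # Sample synthetic labels from the client's empirical label marginal.
    syn_y = labels[torch.randint(len(labels), (n_synthetic,))]
    opt = torch.optim.Adam([syn_x], lr=lr)
    for _ in range(steps):
        loss = torch.tensor(0.0)
        for c in labels.unique():
            syn_c = syn_x[syn_y == c]
            if len(syn_c) > 0:
                real_mu = features[labels == c].mean(dim=0)
                loss = loss + F.mse_loss(syn_c.mean(dim=0), real_mu)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_x.detach(), syn_y

def one_shot_server_round(client_datasets, model, epochs=100):
    """Single communication round: collect every client's distillate and
    train the server model on their union (no model exchange needed)."""
    distillates = [distill_client_data(x, y) for x, y in client_datasets]
    xs = torch.cat([x for x, _ in distillates])
    ys = torch.cat([y for _, y in distillates])
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(xs), ys).backward()
        opt.step()
    return model
```

Because only synthetic distillates cross the network, the server sees one consistent training set rather than a collection of locally biased models, which is how this pattern sidesteps the misleading-teacher problem described above.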
FedFed: Feature Distillation against Data Heterogeneity in Federated Learning
Federated learning (FL) typically faces data heterogeneity, i.e., distribution shift among clients. Sharing clients' information has shown great potential for mitigating data heterogeneity, yet it creates a dilemma between preserving privacy and promoting model performance. To alleviate this dilemma, we raise a fundamental question: is it possible to share partial features of the data to tackle data heterogeneity? In this work, we give an affirmative answer by proposing a novel approach called Federated Feature distillation (FedFed).
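To make the proposed data flow concrete, here is a minimal Python/PyTorch sketch of a FedFed-style communication pattern: clients share only a protected "performance-sensitive" slice of their features and keep the rest local, then train on their own data plus the pooled shared features. The split rule used here (highest-variance feature dimensions) and the names `split_features`, `fedfed_round`, and `train_local` are placeholders invented for illustration; the paper itself learns the partition rather than fixing it by a variance heuristic.

```python
import torch

def split_features(x, sensitive_frac=0.25, noise_std=0.1):
    """Illustrative stand-in for the feature partition: treat the
    highest-variance feature dimensions as performance-sensitive and
    protect them with Gaussian noise before sharing."""
    k = max(1, int(sensitive_frac * x.shape[1]))
    idx = x.var(dim=0).topk(k).indices
    sensitive = torch.zeros_like(x)
    sensitive[:, idx] = x[:, idx] + noise_std * torch.randn(len(x), k)
    robust = x.clone()
    robust[:, idx] = 0.0  # the sensitive slice stays out of the local-only view
    return sensitive, robust

def fedfed_round(clients, train_local):
    """FedFed-style data flow: pool the protected sensitive features
    from all clients, then let each client train on its own data plus
    the shared pool."""
    shared = [(split_features(x)[0], y) for x, y in clients]
    pool_x = torch.cat([s for s, _ in shared])
    pool_y = torch.cat([y for _, y in shared])
    return [train_local(torch.cat([x, pool_x]), torch.cat([y, pool_y]))
            for x, y in clients]
```

The shared pool gives every client a glimpse of the other clients' distributions, which is what mitigates heterogeneity, while the noise on the shared slice is the knob that trades privacy against performance.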