Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data
Pei-Yau Weng, Minh Hoang, Lam M. Nguyen, My T. Thai, Tsui-Wei Weng, Trong Nghia Hoang
arXiv.org Artificial Intelligence
Fine-tuning pre-trained models is a popular approach in machine learning for solving complex tasks with moderate amounts of data. However, fine-tuning the entire pre-trained model is ineffective in federated settings where local data distributions are diversely skewed. To address this, we explore integrating federated learning with a more effective prompt-tuning method, which optimizes a small set of input prefixes to reprogram the pre-trained model's behavior. Our approach transforms federated learning into a distributed set-modeling task, aggregating diverse sets of prompts to globally fine-tune the pre-trained model. We benchmark various baselines based on direct adaptations of existing federated model-aggregation techniques, and we introduce a new probabilistic prompt-aggregation method that substantially outperforms these baselines. Our results on a variety of computer vision datasets confirm that the proposed method is the most effective at combating extreme data heterogeneity in federated learning.
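To make the aggregation setting concrete, here is a minimal sketch of one of the baseline approaches the abstract alludes to: a FedAvg-style weighted average of the clients' learned prompt matrices. All names, shapes, and client sizes below are illustrative assumptions for exposition, not the paper's actual code or its probabilistic aggregation method.

```python
import numpy as np

# Hypothetical setup: each of K clients tunes a small prompt matrix
# (n_prompts x embed_dim) that is prepended to the frozen model's input;
# the backbone model itself is never updated.
rng = np.random.default_rng(0)
K, n_prompts, embed_dim = 3, 4, 8
client_prompts = [rng.normal(size=(n_prompts, embed_dim)) for _ in range(K)]
client_sizes = np.array([100.0, 50.0, 25.0])  # assumed local dataset sizes

# Baseline: FedAvg-style aggregation -- average the clients' prompt
# matrices, weighted by local dataset size.
weights = client_sizes / client_sizes.sum()
global_prompt = sum(w * p for w, p in zip(weights, client_prompts))
```

Under heavily skewed (non-IID) local distributions, clients' prompts can point in conflicting directions, so this naive average degrades; that failure mode is what motivates treating aggregation as modeling a *set* of diverse prompts rather than collapsing them into a single mean.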
Feb-26-2025
- Country:
- North America > United States
- California (0.28)
- Minnesota > Hennepin County
- Minneapolis (0.14)
- Genre:
- Research Report
- Experimental Study (0.93)
- New Finding (0.67)
- Industry:
- Information Technology > Security & Privacy (0.67)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks (1.00)
- Representation & Reasoning > Optimization (0.67)
- Vision (1.00)