Text-driven Prompt Generation for Vision-Language Models in Federated Learning
Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ravi Ganesh, Zhenzhen Li, Lu Peng, Wan-Yi Lin
Prompt learning for vision-language models, e.g., CoOp, has shown great success in adapting CLIP to different downstream tasks, making it a promising solution for federated learning due to its computational efficiency. Existing prompt learning techniques replace hand-crafted text prompts with learned vectors that offer improvements on seen classes but struggle to generalize to unseen classes. Our work addresses this challenge by proposing Federated Text-driven Prompt Generation (FedTPG), which learns a unified prompt generation network across multiple remote clients in a scalable manner. The prompt generation network is conditioned on task-related text input and is thus context-aware, enabling it to generalize to both seen and unseen classes. Comprehensive empirical evaluations on nine diverse image classification datasets show that our method is superior to existing federated prompt learning methods, achieving better overall generalization on both seen and unseen classes, and also generalizing to unseen datasets.

Vision-language models have recently emerged as a transformative technology for machine learning applications. Seminal contributions like Contrastive Language-Image Pretraining (CLIP; Radford et al., 2021) have demonstrated unprecedented capabilities in diverse image classification tasks. Classification methods often leverage manually-engineered text prompts, such as "a photo of a [class]," to utilize CLIP's rich semantic features (Jia et al., 2021). CLIP has shown its robustness and versatility in handling a wide range of image distributions.
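To make the idea of a text-conditioned prompt generator concrete, below is a minimal sketch in PyTorch. The class name PromptGenerator, the MLP architecture, and all dimensions are illustrative assumptions, not the paper's actual design (which could, for instance, use cross-attention over class-name embeddings); the sketch only shows how generated prompts can depend on the task's class names rather than being fixed learned vectors.

```python
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Generate soft prompt vectors conditioned on task-related text.

    Hypothetical sketch: the input is a stack of (frozen) CLIP text
    embeddings of the client's class names; the output is a short
    sequence of prompt token vectors to prepend to the text encoder's
    input in place of hand-crafted prompts like "a photo of a [class]".
    """

    def __init__(self, embed_dim: int = 512, n_prompt_tokens: int = 4):
        super().__init__()
        self.n_prompt_tokens = n_prompt_tokens
        # Illustrative choice: a small MLP over a pooled task context.
        self.net = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, n_prompt_tokens * embed_dim),
        )

    def forward(self, class_embeddings: torch.Tensor) -> torch.Tensor:
        # class_embeddings: (n_classes, embed_dim). Pooling yields a
        # task-level context vector, so the generated prompt depends on
        # which classes this client actually sees (context-awareness).
        context = class_embeddings.mean(dim=0)
        prompts = self.net(context)
        return prompts.view(self.n_prompt_tokens, -1)

# Stand-in for CLIP text embeddings of 10 class names; embeddings from
# a real frozen text encoder would replace this random tensor.
class_embeddings = torch.randn(10, 512)
generator = PromptGenerator()
prompt_tokens = generator(class_embeddings)
print(prompt_tokens.shape)  # torch.Size([4, 512])
```

Because the generator takes the class-name embeddings as input, the same network can produce prompts for classes it never saw during training, which is the property the abstract credits for generalization to unseen classes.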
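The abstract also states that this generator is learned jointly across multiple remote clients. A standard way to do that is FedAvg-style parameter averaging (McMahan et al., 2017); the sketch below assumes that rule and reuses the hypothetical PromptGenerator above. The paper's exact aggregation scheme may differ.

```python
import copy

def federated_average(client_states, weights=None):
    """FedAvg-style aggregation (McMahan et al., 2017) of client
    parameter dicts; assumed here, not the paper's confirmed rule."""
    if weights is None:
        weights = [1.0 / len(client_states)] * len(client_states)
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            w * state[key] for w, state in zip(weights, client_states)
        )
    return global_state

# Each client locally trains its copy of the shared generator on its
# own classes, then the server averages the resulting parameters.
clients = [PromptGenerator() for _ in range(3)]  # from the sketch above
# ... local prompt-learning steps on each client's data go here ...
global_state = federated_average([c.state_dict() for c in clients])
for c in clients:
    c.load_state_dict(global_state)  # broadcast the unified generator
```

Note that only the small prompt generator is communicated, while the CLIP encoders stay frozen on every client, which is what makes prompt learning attractive for federated settings from a computational standpoint.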
arXiv.org Artificial Intelligence
Oct-9-2023