Vision-Language Model Fine-Tuning via Simple Parameter-Efficient Modification
Ming Li, Jike Zhong, Chenxin Li, Liuzhuozheng Li, Nie Lin, Masashi Sugiyama
arXiv.org Artificial Intelligence
Recent advances in fine-tuning Vision-Language Models (VLMs) have witnessed the success of prompt tuning and adapter tuning, while classic fine-tuning of the model's inherent parameters seems to be overlooked. It is believed that fine-tuning the parameters of VLMs with few-shot samples corrupts the pre-trained knowledge, since fine-tuning the CLIP model even degrades performance. In this paper, we revisit this viewpoint and propose a new perspective: fine-tuning specific parameters instead of all of them will uncover the power of classic model fine-tuning on VLMs. Through our meticulous study, we propose CLIPFit, a simple yet effective method to fine-tune CLIP without introducing any overhead of extra parameters. We demonstrate that by fine-tuning only specific bias terms and normalization layers, CLIPFit can improve the performance of zero-shot CLIP by 7.27% in average harmonic mean accuracy. Lastly, to understand how fine-tuning in CLIPFit affects the pre-trained model, we conducted extensive experimental analyses w.r.t. changes in internal parameters and representations. We found that low-level text bias layers and the first layer normalization layer change much more than other layers. The code is available at https://github.com/minglllli/CLIPFit.
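The core idea, fine-tuning only bias terms and normalization layers of a frozen CLIP model, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation (see the linked repository for that); it assumes the Hugging Face `transformers` CLIPModel, and the name-based filters choosing text-encoder MLP biases and image-encoder LayerNorm parameters are an illustrative approximation of the selection described in the abstract.

```python
# Minimal sketch: bias/LayerNorm-only fine-tuning of CLIP (not the official CLIPFit code).
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Freeze every pre-trained parameter first.
for p in model.parameters():
    p.requires_grad = False

trainable = []
for name, p in model.named_parameters():
    # Bias terms inside the text encoder's feed-forward (MLP) blocks.
    if name.startswith("text_model") and "mlp" in name and name.endswith(".bias"):
        p.requires_grad = True
        trainable.append(name)
    # LayerNorm weights and biases inside the image encoder blocks.
    elif name.startswith("vision_model") and "layer_norm" in name:
        p.requires_grad = True
        trainable.append(name)

print(f"Unfrozen tensors: {len(trainable)}, "
      f"parameters: {sum(model.get_parameter(n).numel() for n in trainable)}")

# Only the unfrozen parameters are handed to the optimizer, so no extra
# parameters are introduced beyond those already in the pre-trained model.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
```

Training then proceeds as usual (e.g., cross-entropy over class-name text embeddings on few-shot samples), with the vast majority of the model kept frozen.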
Nov-19-2024