Unlabeled Data vs. Pre-trained Knowledge: Rethinking SSL in the Era of Large Models
Song-Lin Lv, Rui Zhu, Tong Wei, Yu-Feng Li, Lan-Zhe Guo
arXiv.org Artificial Intelligence
Semi-supervised learning (SSL) alleviates the cost of the data-labeling process by exploiting unlabeled data and has achieved promising results. Meanwhile, with the development of large foundation models, exploiting pre-trained models has become a promising way to address label scarcity in downstream tasks, for example via parameter-efficient fine-tuning techniques. This raises a natural yet critical question: when labeled data is limited, should we rely on unlabeled data or on pre-trained models? To investigate this question, we conduct a fair comparison between SSL methods and pre-trained models (e.g., CLIP) on representative image classification tasks under a controlled supervision budget. Experiments reveal that SSL has met its "Waterloo" in the era of large models: pre-trained models show both high efficiency and strong performance on widely adopted SSL benchmarks. This underscores the urgent need for SSL researchers to explore new avenues, such as deeper integration between SSL and pre-trained models. Furthermore, we investigate the potential of Multi-Modal Large Language Models (MLLMs) on image classification tasks. Results show that, despite their massive parameter scales, MLLMs still face significant performance limitations, highlighting that even a seemingly well-studied task remains highly challenging.
Oct-28-2025