Do BERT-Like Bidirectional Models Still Perform Better on Text Classification in the Era of LLMs?
Zhang, Junyan, Huang, Yiming, Liu, Shuliang, Gao, Yubo, Hu, Xuming
arXiv.org Artificial Intelligence
The rapid adoption of LLMs has overshadowed the potential advantages of traditional BERT-like models in text classification. This study challenges the prevailing "LLM-centric" trend by systematically comparing three categories of methods, i.e., fine-tuning BERT-like models, utilizing LLM internal states, and zero-shot inference, across six high-difficulty datasets. Our findings reveal that BERT-like models often outperform LLMs. We further categorize the datasets into three types, perform PCA and probing experiments, and identify task-specific model strengths: BERT-like models excel in pattern-driven tasks, while LLMs dominate tasks requiring deep semantics or world knowledge. Based on this, we propose TaMAS, a fine-grained task selection strategy, advocating a nuanced, task-driven approach over a one-size-fits-all reliance on LLMs.
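The task-driven selection idea behind TaMAS can be illustrated with a minimal sketch. Note that the task categories and routing rule below are illustrative assumptions; the abstract does not specify TaMAS's actual criteria:

```python
# Hypothetical sketch of a TaMAS-style task-driven model selector.
# The category names and the routing rule are assumptions for
# illustration, not the paper's actual selection criteria.

def select_model(task_type: str) -> str:
    """Route a text-classification task to a model family by its dominant demand."""
    # Tasks decided mostly by surface patterns favor fine-tuned encoders.
    pattern_driven = {"surface-pattern", "style", "formatting"}
    # Tasks needing deep semantics or world knowledge favor LLMs.
    semantic_driven = {"deep-semantics", "world-knowledge", "reasoning"}

    if task_type in pattern_driven:
        return "fine-tuned BERT-like model"
    if task_type in semantic_driven:
        return "LLM (zero-shot or internal states)"
    # When the task profile is unclear, compare both empirically.
    return "evaluate both on a validation split"

print(select_model("surface-pattern"))   # fine-tuned BERT-like model
print(select_model("world-knowledge"))   # LLM (zero-shot or internal states)
```

The point of the sketch is the abstract's thesis in code form: the choice of model family is a per-task decision, not a default to LLMs.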
May-27-2025