From BERT to LLMs: Comparing and Understanding Chinese Classifier Prediction in Language Models
Ziqi Zhang, Jianfei Ma, Emmanuele Chersoni, Jieshun You, Zhaoxin Feng
arXiv.org Artificial Intelligence
Classifiers are an important and defining feature of the Chinese language, and their correct prediction is key to numerous educational applications. Yet, whether the most popular Large Language Models (LLMs) possess proper knowledge of Chinese classifiers is a question that has remained largely unexplored in the Natural Language Processing (NLP) literature. To address it, we employ various masking strategies to evaluate LLMs' intrinsic ability, the contribution of different sentence elements, and the role of the attention mechanism during prediction. In addition, we explore fine-tuning LLMs to enhance their classifier prediction performance. Our findings reveal that LLMs perform worse than BERT, even with fine-tuning. Prediction, as expected, benefits greatly from information about the following noun, which also explains the advantage of models with a bidirectional attention mechanism such as BERT.
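The abstract does not spell out the masking setups, but the core idea of masked classifier prediction can be illustrated with a short sketch. The snippet below assumes the Hugging Face transformers library and the public bert-base-chinese checkpoint; both are illustrative choices, not necessarily the models or prompts used in the study.

```python
# A minimal sketch of masked classifier prediction with a Chinese BERT.
# Assumptions (not from the paper): the Hugging Face `transformers`
# library and the public `bert-base-chinese` checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")

# Mask the classifier slot in "我买了一[MASK]书。" ("I bought a [CL] book.");
# the conventional classifier for 书 (book) is 本.
sentence = "我买了一" + fill_mask.tokenizer.mask_token + "书。"

for pred in fill_mask(sentence, top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```

Because BERT's attention is bidirectional, the model sees the noun 书 to the right of the mask when scoring candidate classifiers; a causal LLM decoding left to right would have to commit to a classifier before seeing the noun, which is the asymmetry the paper's masking experiments probe.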
Nov-4-2025