CLEVA: Chinese Language Models EVAluation Platform
Yanyang Li, Jianqiao Zhao, Duo Zheng, Zi-Yuan Hu, Zhi Chen, Xiaohui Su, Yongfeng Huang, Shijia Huang, Dahua Lin, Michael R. Lyu, Liwei Wang
arXiv.org Artificial Intelligence
With the continuous emergence of Chinese Large Language Models (LLMs), how to evaluate a model's capabilities has become an increasingly significant issue. The absence of a comprehensive Chinese benchmark that thoroughly assesses a model's performance, the unstandardized and incomparable prompting procedure, and the prevalent risk of contamination pose major challenges in the current evaluation of Chinese LLMs. We present CLEVA, a user-friendly platform crafted to holistically evaluate Chinese LLMs. Our platform employs a standardized workflow to assess LLMs' performance across various dimensions, regularly updating a competitive leaderboard. To alleviate contamination, CLEVA curates a significant proportion of new data and develops a sampling strategy that guarantees a unique subset for each leaderboard round. Empowered by an easy-to-use interface that requires just a few mouse clicks and a model API, users can conduct a thorough evaluation with minimal coding. Large-scale experiments featuring 23 Chinese LLMs have validated CLEVA's efficacy.
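The abstract mentions a sampling strategy that guarantees a unique data subset for each leaderboard round to mitigate contamination. As a minimal sketch of how such a guarantee could be implemented (this is a hypothetical illustration, not CLEVA's actual code; the function name, seed, and pool structure are all assumptions), one can fix a single seeded shuffle of the data pool and assign each round a disjoint window of it:

```python
import random

def sample_round_subset(pool, round_id, subset_size, master_seed=2023):
    """Draw a deterministic, round-specific subset from `pool`.

    Hypothetical sketch: one global shuffle fixed by `master_seed`
    defines an ordering; each round takes a non-overlapping window
    of that ordering, so no two rounds ever share an example.
    """
    rng = random.Random(master_seed)
    order = list(range(len(pool)))
    rng.shuffle(order)

    start = round_id * subset_size
    end = start + subset_size
    if end > len(pool):
        # The pool is finite; once exhausted, new data must be curated,
        # consistent with the paper's emphasis on regularly adding data.
        raise ValueError("pool exhausted: curate new data for this round")
    return [pool[i] for i in order[start:end]]

# Example usage with a toy pool of 10,000 items:
pool = [f"example-{i}" for i in range(10_000)]
round_3_subset = sample_round_subset(pool, round_id=3, subset_size=500)
```

Because the shuffle is deterministic given the master seed, the per-round subsets are reproducible yet pairwise disjoint, which is one way to keep previously released evaluation data from leaking into later rounds.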
Oct 16, 2023