Lv, Chuancheng
GATEAU: Selecting Influential Samples for Long Context Alignment
Si, Shuzheng, Zhao, Haozhe, Chen, Gang, Li, Yunshui, Luo, Kangyang, Lv, Chuancheng, An, Kaikai, Qi, Fanchao, Chang, Baobao, Sun, Maosong
Aligning large language models to handle instructions with extremely long contexts has yet to be fully investigated. Previous studies attempt to scale up the available data volume by synthesizing long instruction-following samples, as constructing such datasets by hand is challenging for annotators. However, without a well-defined strategy for ensuring data quality, this synthesis may introduce low-quality samples and restrict model performance. Thus, we propose GATEAU, a novel framework that addresses the unique challenge of long context alignment by identifying influential samples enriched with long-range dependency relations. Specifically, GATEAU measures long-range dependencies from two essential aspects: the difficulty of generating the target response due to long-range dependencies, and the difficulty of understanding the long input due to such dependencies. Comprehensive experiments indicate that GATEAU effectively identifies influential samples, and that models trained on these selected samples exhibit better instruction-following and long-context understanding capabilities.
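For intuition, here is a minimal sketch of how a long-range-dependency score might be computed, assuming a simple perplexity-gap proxy: compare the loss of the target response conditioned on the full long input versus a truncated input. The function names, truncation heuristic, and scoring proxy are illustrative assumptions, not GATEAU's exact procedure.

```python
# Illustrative sketch only -- a plausible perplexity-gap proxy for long-range
# dependency, not the paper's exact method. A large gap between the response
# loss under a truncated input and under the full input suggests the response
# genuinely depends on distant context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def response_loss(model, tok, context: str, response: str) -> float:
    """Mean cross-entropy of the response tokens, conditioned on `context`."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    rsp_ids = tok(response, return_tensors="pt", add_special_tokens=False).input_ids
    input_ids = torch.cat([ctx_ids, rsp_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # ignore context tokens in the loss
    with torch.no_grad():
        return model(input_ids=input_ids, labels=labels).loss.item()

def dependency_score(model, tok, long_input: str, response: str,
                     keep_chars: int = 2000) -> float:
    """Hypothetical score: how much harder the response is without distant context."""
    truncated = long_input[-keep_chars:]  # keep only the tail of the input
    full_loss = response_loss(model, tok, long_input, response)
    trunc_loss = response_loss(model, tok, truncated, response)
    return trunc_loss - full_loss  # larger gap => richer long-range dependencies

# Usage: load any causal LM and rank candidate training samples by
# dependency_score, keeping the highest-scoring ones.
# model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
# tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```

Under this proxy, samples whose responses become much harder to generate once distant context is removed would be prioritized for training.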
C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models
Huang, Yuzhen, Bai, Yuzhuo, Zhu, Zhihao, Zhang, Junlei, Zhang, Jinghan, Su, Tangjun, Liu, Junteng, Lv, Chuancheng, Zhang, Yikai, Lei, Jiayi, Fu, Yao, Sun, Maosong, He, Junxian
New NLP benchmarks are urgently needed to keep pace with the rapid development of large language models (LLMs). We present C-Eval, the first comprehensive Chinese evaluation suite designed to assess the advanced knowledge and reasoning abilities of foundation models in a Chinese context. C-Eval comprises multiple-choice questions across four difficulty levels: middle school, high school, college, and professional. The questions span 52 diverse disciplines, ranging from the humanities to science and engineering. C-Eval is accompanied by C-Eval Hard, a subset of very challenging C-Eval subjects that require advanced reasoning abilities to solve. We conduct a comprehensive evaluation of the most advanced LLMs on C-Eval, including both English- and Chinese-oriented models. Results indicate that only GPT-4 achieves an average accuracy above 60%, suggesting that there is still significant room for improvement for current LLMs. We anticipate C-Eval will help analyze important strengths and shortcomings of foundation models, and foster their development and growth for Chinese users.
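For concreteness, a minimal sketch of accuracy scoring on C-Eval-style multiple-choice items follows. The `MCItem` schema and the `predict` callable are hypothetical stand-ins, not the official C-Eval data format or evaluation harness.

```python
# Minimal sketch of accuracy scoring for C-Eval-style multiple-choice items.
# The MCItem schema and `predict` callable are hypothetical; consult the
# official C-Eval release for the real data format and evaluation protocol.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class MCItem:
    question: str
    choices: Dict[str, str]  # e.g. {"A": "...", "B": "...", "C": "...", "D": "..."}
    answer: str              # gold label, e.g. "C"

def format_prompt(item: MCItem) -> str:
    """Render a question and its lettered options as a single prompt.

    (The official C-Eval prompts are in Chinese; English is used here
    purely for illustration.)
    """
    opts = "\n".join(f"{k}. {v}" for k, v in sorted(item.choices.items()))
    return f"{item.question}\n{opts}\nAnswer:"

def accuracy(items: List[MCItem], predict: Callable[[str], str]) -> float:
    """`predict` maps a prompt to a letter; accuracy is the fraction correct."""
    correct = sum(predict(format_prompt(it)) == it.answer for it in items)
    return correct / len(items)
```

Any model can then be plugged in as `predict`, e.g. a function that queries an LLM and extracts the first option letter from its output.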