SurveyEval: Towards Comprehensive Evaluation of LLM-Generated Academic Surveys
Jiahao Zhao, Shuaixing Zhang, Nan Xu, Lei Wang
arXiv.org Artificial Intelligence
LLM-based automatic survey systems are transforming how users acquire information from the web by integrating retrieval, organization, and content synthesis into end-to-end generation pipelines. While recent work focuses on developing new generation pipelines, how to evaluate such complex systems remains a significant challenge. To this end, we introduce SurveyEval, a comprehensive benchmark that evaluates automatically generated surveys along three dimensions: overall quality, outline coherence, and reference accuracy. We extend the evaluation across 7 subjects and augment the LLM-as-a-Judge framework with human references to strengthen evaluation-human alignment. Evaluation results show that general long-text or paper-writing systems tend to produce lower-quality surveys, whereas specialized survey-generation systems deliver substantially higher-quality results. We envision SurveyEval as a scalable testbed for understanding and improving automatic survey systems across diverse subjects and evaluation criteria.
December 3, 2025