DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process
Zhu, Minjun; Weng, Yixuan; Yang, Linyi; Zhang, Yue
arXiv.org Artificial Intelligence
Large Language Models (LLMs) are increasingly utilized in scientific research assessment, particularly in automated paper review. However, existing LLM-based review systems face significant challenges, including limited domain expertise, hallucinated reasoning, and a lack of structured evaluation. To address these limitations, we introduce DeepReview, a multi-stage framework designed to emulate expert reviewers by incorporating structured analysis, literature retrieval, and evidence-based argumentation. Using DeepReview-13K, a curated dataset with structured annotations, we train DeepReviewer-14B, which outperforms CycleReviewer-70B while generating fewer tokens. In its best mode, DeepReviewer-14B achieves win rates of 88.21% and 80.20% against GPT-o1 and DeepSeek-R1 in evaluations. Our work sets a new benchmark for LLM-based paper review, with all resources publicly available. The code, model, dataset, and demo have been released at http://ai-researcher.net.
Mar-11-2025