GPBench: A Comprehensive and Fine-Grained Benchmark for Evaluating Large Language Models as General Practitioners
Zheqing Li, Yiying Yang, Jiping Lang, Wenhao Jiang, Yuhang Zhao, Shuang Li, Dingqian Wang, Zhu Lin, Xuanna Li, Yuze Tang, Jiexian Qiu, Xiaolin Lu, Hongji Yu, Shuang Chen, Yuhua Bi, Xiaofei Zeng, Yixian Chen, Junrong Chen, Lin Yao
General practitioners (GPs) serve as the cornerstone of primary healthcare systems by providing continuous and comprehensive medical services. However, due to the community-oriented nature of their practice, uneven training, and resource gaps, the clinical proficiency of GPs can vary significantly across regions and healthcare settings. Large Language Models (LLMs) have demonstrated great potential in clinical and medical applications, making them a promising tool for supporting general practice. However, most existing benchmarks and evaluation frameworks focus on exam-style assessments (typically multiple-choice questions) and lack comprehensive test sets that accurately mirror the real-world scenarios encountered by GPs. To evaluate how effectively LLMs can make decisions in the daily work of GPs, we designed GPBench, which consists of test questions drawn from clinical practice together with a novel evaluation framework. The test set includes multiple-choice questions that assess fundamental knowledge of general practice, as well as realistic, scenario-based problems. All questions are meticulously annotated by experts and incorporate rich, fine-grained information related to clinical management. The proposed evaluation framework is grounded in the competency model for general practice, providing a comprehensive methodology for assessing LLM performance in real-world settings. As the first large-model evaluation set targeting GP decision-making scenarios, GPBench allows us to evaluate current mainstream LLMs. Expert assessment reveals that in areas such as disease staging, complication recognition, treatment detail, and medication usage, these models exhibit at least ten major shortcomings. Overall, existing LLMs are not yet suitable for independent use in real-world GP working scenarios without human oversight.
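The abstract does not specify GPBench's data format or scoring code; as a rough illustration only, the sketch below assumes a hypothetical item schema (GPBenchItem, score_mcq, and the predict callback are invented names, not the authors' implementation) showing how the multiple-choice subset of such a benchmark might be represented and scored.

```python
from dataclasses import dataclass, field

# Hypothetical schema for a GPBench-style item; field names are illustrative,
# not taken from the paper's released data.
@dataclass
class GPBenchItem:
    question: str                      # clinical stem (MCQ or scenario-based)
    options: list[str] | None          # answer choices for MCQ items, None for open scenarios
    answer: str                        # gold answer or expert reference response
    annotations: dict = field(default_factory=dict)  # fine-grained expert labels,
                                                     # e.g. disease stage, complications, medication

def score_mcq(items: list[GPBenchItem], predict) -> float:
    """Accuracy over the multiple-choice subset; `predict(question, options)` wraps an LLM call."""
    mcq = [it for it in items if it.options is not None]
    if not mcq:
        return 0.0
    correct = sum(predict(it.question, it.options) == it.answer for it in mcq)
    return correct / len(mcq)
```

Scenario-based items would need expert or rubric-based judgment against the fine-grained annotations rather than exact-match scoring, which is where the paper's competency-model-based framework comes in.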
arXiv.org Artificial Intelligence
Mar-21-2025
- Country:
- Europe > Italy (0.14)
- North America > United States (0.14)
- Genre:
- Research Report
- Experimental Study (1.00)
- New Finding (1.00)
- Industry:
- Health & Medicine
- Consumer Health (1.00)
- Diagnostic Medicine (1.00)
- Health Care Providers & Services (1.00)
- Health Care Technology (0.69)
- Pharmaceuticals & Biotechnology (1.00)
- Therapeutic Area
- Cardiology/Vascular Diseases (1.00)
- Endocrinology > Diabetes (0.46)
- Gastroenterology (1.00)
- Immunology (0.67)
- Infections and Infectious Diseases (1.00)
- Internal Medicine (0.67)
- Nephrology (1.00)
- Pulmonary/Respiratory Diseases (0.67)
- Technology: