Benchmarking Next-Generation Reasoning-Focused Large Language Models in Ophthalmology: A Head-to-Head Evaluation on 5,888 Items
Zou, Minjie, Srinivasan, Sahana, Lo, Thaddaeus Wai Soon, Zou, Ke, Yang, Gabriel Dawei, Ai, Xuguang, Kim, Hyunjae, Singer, Maxwell, Antaki, Fares, Li, Kelvin, Chang, Robert, Tan, Marcus, Chen, David Ziyou, Liu, Dianbo, Chen, Qingyu, Tham, Yih Chung
arXiv.org Artificial Intelligence
Recent advances in reasoning-focused large language models (LLMs) mark a shift from general LLMs toward models designed for complex decision-making, a crucial capability in medicine. However, their performance in specialized domains such as ophthalmology remains underexplored. This study comprehensively evaluated and compared the accuracy and reasoning capabilities of four newly released reasoning-focused LLMs: DeepSeek-R1, OpenAI o1, o3-mini, and Gemini 2.0 Flash-Thinking. Each model was assessed on 5,888 multiple-choice ophthalmology exam questions from the MedMCQA dataset in a zero-shot setting. Quantitative evaluation included accuracy, Macro-F1, and five text-generation metrics (ROUGE-L, METEOR, BERTScore, BARTScore, and AlignScore), computed against ground-truth reasoning. Average inference time was recorded for a subset of 100 randomly selected questions. Additionally, two board-certified ophthalmologists qualitatively assessed the clarity, completeness, and reasoning structure of responses to differential-diagnosis questions. o1 (0.902) and DeepSeek-R1 (0.888) achieved the highest accuracy, with o1 also leading in Macro-F1 (0.900). Performance across the text-generation metrics varied: o3-mini excelled in ROUGE-L (0.151), o1 in METEOR (0.232), DeepSeek-R1 and o3-mini tied for BERTScore (0.673), DeepSeek-R1 (-4.105) and Gemini 2.0 Flash-Thinking (-4.127) performed best in BARTScore, while o3-mini (0.181) and o1 (0.176) led in AlignScore. Inference time varied across models, with DeepSeek-R1 the slowest (40.4 seconds) and Gemini 2.0 Flash-Thinking the fastest (6.7 seconds). Qualitative evaluation revealed that DeepSeek-R1 and Gemini 2.0 Flash-Thinking tended to provide detailed, comprehensive intermediate reasoning, whereas o1 and o3-mini gave concise, summarized justifications.
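The classification metrics above (accuracy, Macro-F1) and the lexical-overlap metric ROUGE-L can be sketched in plain Python. This is a minimal illustration, not the authors' evaluation code: the function names, the four-option label set (A–D), and whitespace tokenization are assumptions; the study presumably used standard metric implementations for METEOR, BERTScore, BARTScore, and AlignScore, which require model-based scorers and are omitted here.

```python
def accuracy(preds, golds):
    """Fraction of exact matches between predicted and gold answer choices."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def macro_f1(preds, golds, labels=("A", "B", "C", "D")):
    """Unweighted mean of per-class F1 over the answer-choice labels."""
    f1s = []
    for lab in labels:
        tp = sum(p == lab and g == lab for p, g in zip(preds, golds))
        fp = sum(p == lab and g != lab for p, g in zip(preds, golds))
        fn = sum(p != lab and g == lab for p, g in zip(preds, golds))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def rouge_l_f(candidate, reference):
    """ROUGE-L F-measure: longest common subsequence of whitespace tokens."""
    c, r = candidate.split(), reference.split()
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c, 1):
        for j, rt in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ct == rt else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    p, rec = lcs / len(c), lcs / len(r)
    return 2 * p * rec / (p + rec)
```

Macro-F1 averages F1 over answer-choice classes equally, so it penalizes a model that is accurate only on over-represented options, which is why the paper reports it alongside raw accuracy.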
Apr-16-2025