Large Language Models Perform Diagnostic Reasoning
Cheng-Kuang Wu, Wei-Lin Chen, Hsin-Hsi Chen
arXiv.org Artificial Intelligence
We explore the extension of chain-of-thought (CoT) prompting to medical reasoning for the task of automatic diagnosis. Motivated by doctors' underlying reasoning process, we present Diagnostic-Reasoning CoT (DR-CoT). Empirical results demonstrate that by simply prompting large language models trained only on general text corpora with two DR-CoT exemplars, diagnostic accuracy improves by 15% compared to standard prompting. Moreover, the gap reaches a pronounced 18% in out-of-domain settings. Our findings suggest expert-knowledge reasoning in large language models can be elicited through proper prompting.
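The abstract describes prompting a general-purpose LLM with two diagnostic-reasoning exemplars prepended to the query. A minimal sketch of that few-shot prompt construction is below; the exemplar wording, field names, and helper function are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of two-shot DR-CoT prompt construction.
# The exemplar content and structure are illustrative, not the
# paper's actual exemplars.

def build_dr_cot_prompt(exemplars, query):
    """Concatenate DR-CoT exemplars with a new patient description,
    ending with an open "Reasoning:" cue for the model to complete."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Patient: {ex['symptoms']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"Diagnosis: {ex['diagnosis']}\n"
        )
    parts.append(f"Patient: {query}\nReasoning:")
    return "\n".join(parts)

# Two made-up exemplars in the style of doctors' diagnostic reasoning.
EXEMPLARS = [
    {"symptoms": "Fever, productive cough, and shortness of breath "
                 "for three days.",
     "reasoning": "Fever with lower respiratory symptoms suggests a "
                  "respiratory infection; sputum production supports it.",
     "diagnosis": "Pneumonia"},
    {"symptoms": "Sneezing, itchy eyes, and clear nasal discharge "
                 "every spring.",
     "reasoning": "A seasonal pattern with ocular itching points to an "
                  "allergic rather than infectious process.",
     "diagnosis": "Allergic rhinitis"},
]

prompt = build_dr_cot_prompt(
    EXEMPLARS, "Sore throat and right ear pain for two days."
)
print(prompt)
```

The resulting string would then be sent to the language model, whose completion continues the reasoning chain before emitting a diagnosis.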
Jul-17-2023
- Country:
- Asia (0.28)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Health & Medicine
- Diagnostic Medicine (1.00)
- Therapeutic Area
- Immunology (1.00)
- Infections and Infectious Diseases (1.00)
- Neurology (1.00)
- Otolaryngology (1.00)
- Pulmonary/Respiratory Diseases (1.00)
- Technology: