CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis
Junying Chen, Chi Gui, Anningzhe Gao, Ke Ji, Xidong Wang, Xiang Wan, Benyou Wang
arXiv.org Artificial Intelligence
The field of medical diagnosis has undergone a significant transformation with the advent of large language models (LLMs), yet the challenges of interpretability within these models remain largely unaddressed. This study introduces Chain-of-Diagnosis (CoD) to enhance the interpretability of LLM-based medical diagnostics. CoD transforms the diagnostic process into a diagnostic chain that mirrors a physician's thought process, providing a transparent reasoning pathway. Additionally, CoD outputs the disease confidence distribution to ensure transparency in decision-making. This interpretability makes model diagnostics controllable and aids in identifying critical symptoms for inquiry through the entropy reduction of confidences. With CoD, we developed DiagnosisGPT, capable of diagnosing 9,604 diseases. Experimental results demonstrate that DiagnosisGPT outperforms other LLMs on diagnostic benchmarks. Moreover, DiagnosisGPT provides interpretability while ensuring controllability in diagnostic rigor.
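The abstract's idea of selecting the next symptom to ask about via "entropy reduction of confidences" can be sketched as follows. This is a minimal illustration, not the paper's implementation: the disease names, the candidate symptom, and the updated confidence distributions below are all hypothetical, and we assume a symptom is preferred when the expected Shannon entropy of the posterior confidence distribution is lowest.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a disease-confidence distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_entropy_reduction(prior, posteriors):
    """Expected drop in entropy from inquiring about one symptom.

    `posteriors` maps each possible patient answer to a pair
    (probability of that answer, updated confidence distribution).
    """
    expected_posterior = sum(
        p_answer * entropy(dist) for p_answer, dist in posteriors.values()
    )
    return entropy(prior) - expected_posterior

# Hypothetical confidences over three candidate diseases.
prior = {"flu": 0.5, "cold": 0.3, "covid": 0.2}

# Hypothetical updated distributions after asking about "fever".
posteriors = {
    "yes": (0.6, {"flu": 0.7, "cold": 0.1, "covid": 0.2}),
    "no":  (0.4, {"flu": 0.2, "cold": 0.6, "covid": 0.2}),
}

gain = expected_entropy_reduction(prior, posteriors)
print(round(gain, 3))  # expected information gain, in bits
```

Ranking all candidate symptoms by this expected gain and asking about the top one is the standard information-gain heuristic; the paper's confidence distributions come from the model's own outputs rather than the toy numbers used here.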
Jul-18-2024