A Continued Pretrained LLM Approach for Automatic Medical Note Generation
Dong Yuan, Eti Rastogi, Gautam Naik, Sree Prasanna Rajagopal, Sagar Goyal, Fen Zhao, Bharath Chintagunta, Jeff Ward
arXiv.org Artificial Intelligence
LLMs are revolutionizing NLP tasks. However, the most advanced LLMs, such as GPT-4, are often prohibitively expensive for most specialized fields. We introduce HEAL, the first continued-pretrained 13B LLaMA2-based LLM purpose-built for medical conversations and evaluated on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA on PubMedQA, with an accuracy of 78.4%. It also achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL surpasses GPT-4 and Med-PaLM 2 in identifying more correct medical concepts and exceeds the performance of human scribes and other comparable models in correctness and completeness.
Apr-3-2024