Magoc, Tanja
A Study of Generative Large Language Model for Medical Research and Healthcare
Peng, Cheng, Yang, Xi, Chen, Aokun, Smith, Kaleb E, PourNejatian, Nima, Costa, Anthony B, Martin, Cheryl, Flores, Mona G, Zhang, Ying, Magoc, Tanja, Lipori, Gloria, Mitchell, Duane A, Ospina, Naykky S, Ahmed, Mustafa M, Hogan, William R, Shenkman, Elizabeth A, Guo, Yi, Bian, Jiang, Wu, Yonghui
There is enormous enthusiasm, as well as concern, about using large language models (LLMs) in healthcare, yet current assumptions are all based on general-purpose LLMs such as ChatGPT. This study develops a clinical generative LLM, GatorTronGPT, using 277 billion words of mixed clinical and English text and a GPT-3 architecture with 20 billion parameters. GatorTronGPT improves biomedical natural language processing for medical research. NLP models trained on synthetic text generated by GatorTronGPT outperform NLP models trained on real-world clinical text. A physician Turing test using a 1 (worst) to 9 (best) scale shows that there is no significant difference in linguistic readability (p = 0.22; 6.57 for GatorTronGPT compared with 6.93 for human) or clinical relevance (p = 0.91; 7.0 for GatorTronGPT compared with 6.97 for human) and that physicians cannot differentiate the two (p < 0.001). This study provides insights into the opportunities and challenges of LLMs for medical research and healthcare.
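As a rough illustration of the synthetic-text workflow described in this abstract, the Python sketch below generates clinical-style text with a causal language model through the Hugging Face transformers API. The model identifier, prompt, and sampling parameters are placeholders (assumptions), not a confirmed GatorTronGPT release path.

# Minimal sketch of generating synthetic clinical text with a causal LLM via
# Hugging Face transformers. The model identifier below is a placeholder
# (an assumption), not a confirmed release path for GatorTronGPT.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/gatortrongpt"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Chief complaint: shortness of breath. History of present illness:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; sampling parameters are illustrative only.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Text produced in this way would then serve as training data for the "synthetic" NLP models that the study compares against models trained on real clinical notes.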
Identifying Symptoms of Delirium from Clinical Narratives Using Natural Language Processing
Chen, Aokun, Paredes, Daniel, Yu, Zehao, Lou, Xiwei, Brunson, Roberta, Thomas, Jamie N., Martinez, Kimberly A., Lucero, Robert J., Magoc, Tanja, Solberg, Laurence M., Snigurska, Urszula A., Ser, Sarah E., Prosperi, Mattia, Bian, Jiang, Bjarnadottir, Ragnhildur I., Wu, Yonghui
Delirium is an acute decline or fluctuation in attention, awareness, or other cognitive functions that can lead to serious adverse outcomes. Despite these severe outcomes, delirium frequently goes unrecognized and uncoded in patients' electronic health records (EHRs) due to its transient and diverse nature. Natural language processing (NLP), a key technology for extracting medical concepts from clinical narratives, has shown great potential in studies of delirium outcomes and symptoms. To assist in the diagnosis and phenotyping of delirium, we formed an expert panel to categorize diverse delirium symptoms, composed annotation guidelines, created an annotated corpus covering these symptoms, and developed NLP methods to extract delirium symptoms from clinical notes. We compared 5 state-of-the-art transformer models, including 2 models (BERT and RoBERTa) from the general domain and 3 models (BERT_MIMIC, RoBERTa_MIMIC, and GatorTron) from the clinical domain. GatorTron achieved the best strict and lenient F1 scores of 0.8055 and 0.8759, respectively. We conducted an error analysis to identify challenges in annotating delirium symptoms and developing NLP systems. To the best of our knowledge, this is the first large language model-based delirium symptom extraction system. Our study lays the foundation for the future development of computable phenotypes and diagnosis methods for delirium.
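The strict and lenient F1 scores reported above are standard span-level extraction metrics; a minimal sketch, assuming symptom spans are represented as (start, end, label) character offsets, shows how the two differ.

# Minimal sketch of strict vs. lenient span-level F1 for symptom extraction.
# "Strict" requires exact boundary + label match; "lenient" accepts any
# character overlap with the same label. The span format (start, end, label)
# is an assumption for illustration.

def span_f1(gold, pred, lenient=False):
    def match(g, p):
        if g[2] != p[2]:
            return False
        if lenient:
            return g[0] < p[1] and p[0] < g[1]  # any character overlap
        return g[0] == p[0] and g[1] == p[1]    # exact boundaries

    tp_pred = sum(any(match(g, p) for g in gold) for p in pred)
    precision = tp_pred / len(pred) if pred else 0.0
    recall = (sum(any(match(g, p) for p in pred) for g in gold) / len(gold)) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [(10, 18, "SYMPTOM"), (42, 55, "SYMPTOM")]
pred = [(10, 18, "SYMPTOM"), (40, 55, "SYMPTOM")]
print("strict F1:", span_f1(gold, pred))                  # penalizes the boundary mismatch
print("lenient F1:", span_f1(gold, pred, lenient=True))   # credits the overlapping span

Strict scoring requires exact boundary and label agreement, while lenient scoring credits any overlapping span with the same label, which is why the lenient F1 (0.8759) exceeds the strict F1 (0.8055) in the results above.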
GatorTron: A Large Clinical Language Model to Unlock Patient Information from Unstructured Electronic Health Records
Yang, Xi, Chen, Aokun, PourNejatian, Nima, Shin, Hoo Chang, Smith, Kaleb E, Parisien, Christopher, Compas, Colin, Martin, Cheryl, Flores, Mona G, Zhang, Ying, Magoc, Tanja, Harle, Christopher A, Lipori, Gloria, Mitchell, Duane A, Hogan, William R, Shenkman, Elizabeth A, Bian, Jiang, Wu, Yonghui
There is increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems that utilize clinical narratives. However, there are few clinical language models, and the largest one trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data benefit these NLP tasks. GatorTron scales the clinical language model from 110 million to 8.9 billion parameters and improves performance on all 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvements in accuracy for NLI and MQA, respectively), gains that can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.
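For readers who want to try the released checkpoints, the sketch below frames clinical concept extraction as token classification on top of a GatorTron-style encoder using Hugging Face transformers. The model identifier and label set are assumptions (the abstract points to the NVIDIA NGC catalog), and the classification head is randomly initialized, so it would still need fine-tuning on annotated clinical notes.

# Minimal sketch of using a GatorTron-style encoder for clinical concept
# extraction framed as token classification (NER). The model identifier and
# label set below are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_ID = "UFNLP/gatortron-base"  # assumed identifier; substitute the NGC checkpoint as needed
LABELS = ["O", "B-PROBLEM", "I-PROBLEM", "B-TREATMENT", "I-TREATMENT"]  # illustrative label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_ID, num_labels=len(LABELS)
)  # classification head is randomly initialized and requires fine-tuning

note = "Patient started on metformin for type 2 diabetes mellitus."
inputs = tokenizer(note, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

predicted = [LABELS[i] for i in logits.argmax(dim=-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predicted)))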