
Collaborating Authors

 Martin, Cheryl


A Study of Generative Large Language Model for Medical Research and Healthcare

arXiv.org Artificial Intelligence

There is enormous enthusiasm, as well as concern, about using large language models (LLMs) in healthcare, yet current assumptions are all based on general-purpose LLMs such as ChatGPT. This study develops a clinical generative LLM, GatorTronGPT, using 277 billion words of mixed clinical and English text with a GPT-3 architecture of 20 billion parameters. GatorTronGPT improves biomedical natural language processing for medical research. NLP models trained on synthetic text generated by GatorTronGPT outperform NLP models trained on real-world clinical text. A physicians' Turing test using a 1 (worst) to 9 (best) scale shows no significant difference in linguistic readability (p = 0.22; 6.57 for GatorTronGPT compared with 6.93 for human) or clinical relevance (p = 0.91; 7.0 for GatorTronGPT compared with 6.97 for human), and that physicians cannot differentiate the two (p < 0.001). This study provides insights into the opportunities and challenges of LLMs for medical research and healthcare.
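The abstract's Turing-test analysis compares mean 1-9 ratings between model-generated and human-written notes and reports a p-value for the difference. A minimal sketch of one such comparison, using a stdlib-only permutation test rather than whatever test the study actually used, and with invented illustrative ratings (the lists below are hypothetical, not the study's data):

```python
# Hedged sketch: permutation test for a difference in mean 1-9 ratings,
# in the spirit of the physicians' Turing test described above.
# Rating lists are invented illustrative data, NOT the study's.
import random
import statistics


def permutation_test(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test for a difference in mean ratings."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            count += 1
    return count / n_iter  # fraction of shuffles at least as extreme


model_ratings = [7, 6, 7, 8, 6, 7, 6, 7]  # hypothetical
human_ratings = [7, 7, 8, 6, 7, 8, 7, 6]  # hypothetical
p = permutation_test(model_ratings, human_ratings)
```

A large p here, as in the study's readability and relevance comparisons, would mean the observed gap in mean ratings is consistent with chance.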


GatorTron: A Large Clinical Language Model to Unlock Patient Information from Unstructured Electronic Health Records

arXiv.org Artificial Intelligence

There is increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, and the largest trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model - GatorTron - using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve all 5 clinical NLP tasks (e.g., 9.6% and 9.5% accuracy improvements on NLI and MQA, respectively), and can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.


Crowdsourcing Evaluations of Classifier Interpretability

AAAI Conferences

This paper presents work using crowdsourcing to assess explanations for supervised text classification. Here, an explanation is defined as a set of words from the input text that a classifier or human believes to be most useful for making a classification decision. We compared two types of explanations for classification decisions: human-generated and computer-generated. The comparison is based on whether the type of explanation was identifiable and on which type was preferred. Crowdsourcing was used to collect two types of data for these experiments. First, human-generated explanations were collected by having users select an appropriate category for a piece of text and highlight the words that best support this category. Second, users were asked to compare human- and computer-generated explanations and indicate which they preferred and why. The crowdsourced data used for this paper was collected primarily via Amazon's Mechanical Turk, using several quality control methods. We found that in one test corpus the two explanation types were virtually indistinguishable, and that participants did not have a significant preference for one type over another. For another corpus, the explanations were slightly more distinguishable, and participants preferred the computer-generated explanations at a small, but statistically significant, level. We conclude that computer-generated explanations for text classification can be comparable in quality to human-generated explanations.
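The paper defines an explanation as the set of input words most useful for a classification decision. A minimal sketch of one way a "computer-generated explanation" of this kind could be produced — picking the in-text words a classifier weights most heavily; the weight table and text are invented illustrations, not the paper's actual model or corpus:

```python
# Hedged sketch: a "computer-generated explanation" as defined above --
# the k words from the input text with the highest classifier weights.
# The weight table is a hypothetical illustration, not the paper's model.
def explain(text, weights, k=3):
    """Return up to k in-text words with the highest positive weights."""
    words = set(text.lower().split())
    scored = [(w, weights.get(w, 0.0)) for w in words]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, score in scored[:k] if score > 0]


weights = {"goal": 2.1, "season": 1.7, "coach": 1.2, "the": 0.0}  # hypothetical
explain("The coach praised the goal that sealed the season", weights)
# -> ['goal', 'season', 'coach']
```

Such word sets can then be shown alongside human-highlighted words for the crowd comparison the paper describes.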


The 2004 AAAI Spring Symposium Series

AI Magazine

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2004 Spring Symposium Series, Monday through Wednesday, March 22-24, at Stanford University. The titles of the eight symposia were (1) Accessible Hands-on Artificial Intelligence and Robotics Education; (2) Architectures for Modeling Emotion: Cross-Disciplinary Foundations; (3) Bridging the Multiagent and Multirobotic Research Gap; (4) Exploring Attitude and Affect in Text: Theories and Applications; (5) Interaction between Humans and Autonomous Systems over Extended Operation; (6) Knowledge Representation and Ontologies for Autonomous Systems; (7) Language Learning: An Interdisciplinary Perspective; and (8) Semantic Web Services. Each symposium had limited attendance. Most symposia chairs elected to create AAAI technical reports of their symposium, which are available as paperbound reports or (for AAAI members) are downloadable on the AAAI members-only Web site. This report includes summaries of the eight symposia, written by the symposia chairs.

