Narayanan, Sanjana
Assessing the Limitations of Large Language Models in Clinical Fact Decomposition
Munnangi, Monica, Swaminathan, Akshay, Fries, Jason Alan, Jindal, Jenelle, Narayanan, Sanjana, Lopez, Ivan, Tu, Lucia, Chung, Philip, Omiye, Jesutofunmi A., Kashyap, Mehr, Shah, Nigam
Verifying factual claims is critical for using large language models (LLMs) in healthcare. Recent work has proposed fact decomposition, which uses LLMs to rewrite source text into concise sentences that each convey a single piece of information, as an approach to fine-grained fact verification. Clinical documentation poses unique challenges for fact decomposition due to dense terminology and diverse note types. To explore these challenges, we present FactEHR, a dataset of full-document fact decompositions for 2,168 clinical notes spanning four note types from three hospital systems. Our evaluation, including review by clinicians, highlights significant variability in the quality of fact decomposition across four commonly used LLMs, with some LLMs generating 2.6x more facts per sentence than others. The results underscore the need for better LLM capabilities to support factual verification in clinical text. To facilitate future research in this direction, we plan to release our code at \url{https://github.com/som-shahlab/factehr}.
Prediction-focused Mixture Models
Narayanan, Sanjana, Sharma, Abhishek, Zeng, Catherine, Doshi-Velez, Finale
In many applications, we want a generative model of the data that is also useful for specific downstream tasks. Mixture models are useful for identifying discrete components in the data, but if misspecified, they may fail to identify the components relevant to downstream tasks; further, current inference techniques often fail to overcome misspecification even when a supervisory signal is provided. We introduce the prediction-focused mixture model, which selects and models the input features relevant to predicting the targets. We demonstrate that our approach identifies relevant signal in the inputs even when the model is highly misspecified.