Dieterich, Christoph
Arbitrary Data as Images: Fusion of Patient Data Across Modalities and Irregular Intervals with Vision Transformers
Tölle, Malte, Scharaf, Mohamad, Fischer, Samantha, Reich, Christoph, Zeid, Silav, Dieterich, Christoph, Meder, Benjamin, Frey, Norbert, Wild, Philipp, Engelhardt, Sandy
A patient undergoes multiple examinations during each hospital stay, each of which provides a different facet of their health status. These assessments include temporal data with varying sampling rates, discrete single-point measurements, therapeutic interventions such as medication administration, and images. While physicians can process and integrate diverse modalities intuitively, neural networks require modality-specific modeling, which complicates the training procedure. We demonstrate that this complexity can be significantly reduced by visualizing all information as images, along with unstructured text, and subsequently training a conventional vision-text transformer. Our approach, Vision Transformer for irregular sampled Multi-modal Measurements (ViTiMM), not only simplifies data preprocessing and modeling but also outperforms current state-of-the-art methods in predicting in-hospital mortality and phenotyping, as evaluated on 6,175 patients from the MIMIC-IV dataset. The modalities include the patients' clinical measurements, medications, X-ray images, and electrocardiography scans. We hope our work inspires advancements in multi-modal medical AI by reducing the training complexity to (visual) prompt engineering, thus lowering entry barriers and enabling no-code solutions for training. The source code will be made publicly available.
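To make the "data as images" idea concrete, here is a minimal sketch: it renders one irregularly sampled vital sign as a plot image and scores it against free-text prompts with an off-the-shelf CLIP-style vision-text model. The checkpoint, the toy measurements, and the prompt wording are assumptions for illustration only, not the ViTiMM pipeline described in the paper.

# Illustrative sketch, not the authors' code: irregularly sampled
# measurements are plotted as an image, then scored against text prompts
# with a generic CLIP-style vision-text model.
import io

import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def measurements_to_image(times, values, label):
    """Plot one irregularly sampled measurement series as an RGB image."""
    fig, ax = plt.subplots(figsize=(3, 3), dpi=75)
    ax.plot(times, values, marker="o")
    ax.set_title(label)
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Toy example: heart rate at uneven intervals (hours since admission).
image = measurements_to_image([0.0, 1.5, 2.0, 7.5], [82, 95, 91, 110], "heart rate")

# Generic public checkpoint as a stand-in for the paper's vision-text model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
prompts = ["stable vital signs", "deteriorating vital signs"]  # hypothetical labels
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs.squeeze().tolist())))

In the paper itself the rendered images are used to train the transformer rather than for zero-shot scoring; the sketch only shows how heterogeneous measurements become ordinary image inputs.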
Clinical Information Extraction for Low-Resource Languages with Few-Shot Learning Using Pre-trained Language Models and Prompting
Richter-Pechanski, Phillip, Wiesenbach, Philipp, Schwab, Dominic M., Kiriakou, Christina, Geis, Nicolas, Dieterich, Christoph, Frank, Anette
Automatic extraction of medical information from unstructured clinical documents poses several challenges: the high cost of the required clinical expertise, restricted computational resources, strict privacy regulations, and limited interpretability of model predictions. Recent domain-adaptation and prompting methods using lightweight masked language models have shown promising results with minimal training data and allow the application of well-established interpretability methods. We are the first to present a systematic evaluation of advanced domain-adaptation and prompting methods on a low-resource medical-domain task: multiclass section classification of German doctor's letters. We evaluate a variety of models, model sizes, (further-pre)training and task settings, and conduct extensive class-wise evaluations supported by Shapley values to validate the quality of the small-scale training data and to ensure interpretability of model predictions. We show that in few-shot learning scenarios, a lightweight, domain-adapted pretrained language model, prompted with just 20 shots per section class, outperforms a traditional classification model, increasing accuracy from 48.6% to 79.1%.
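The prompting setup can be illustrated with a short cloze-style sketch: each section class is mapped to a verbalizer token, and a masked language model scores which token best fills a template appended to the input sentence. The model name, the German template, and the verbalizer words below are assumptions for demonstration, not the authors' exact configuration.

# Illustrative sketch, not the authors' pipeline: cloze-style prompting of a
# masked language model for section classification of German clinical text.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Hypothetical stand-in for a domain-adapted German clinical model.
model_name = "bert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Map each section class to one verbalizer word (assumed to be single tokens).
verbalizers = {"Anamnese": "Anamnese", "Diagnose": "Diagnose", "Medikation": "Medikation"}

def classify(sentence: str) -> str:
    """Score each class by the masked-LM logit of its verbalizer token."""
    prompt = f"{sentence} Dieser Abschnitt ist {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    scores = {
        label: logits[tokenizer.convert_tokens_to_ids(token)].item()
        for label, token in verbalizers.items()
    }
    return max(scores, key=scores.get)

print(classify("Der Patient klagt seit drei Tagen über Brustschmerzen."))

In a few-shot setting such as the one evaluated in the paper, the same template would additionally be used to fine-tune the model on a handful of labeled examples per class rather than relying on the pretrained weights alone.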