Wetscherek, Maria
Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing
Bannur, Shruthi, Hyland, Stephanie, Liu, Qianchu, Pérez-García, Fernando, Ilse, Maximilian, Castro, Daniel C., Boecking, Benedikt, Sharma, Harshita, Bouzid, Kenza, Thieme, Anja, Schwaighofer, Anton, Wetscherek, Maria, Lungren, Matthew P., Nori, Aditya, Alvarez-Valle, Javier, Oktay, Ozan
Self-supervised learning in vision-language processing (VLP) exploits semantic alignment between the imaging and text modalities. Prior work in biomedical VLP has mostly relied on the alignment of single image and report pairs, even though clinical notes commonly refer to prior images. This not only leads to poor alignment between the modalities but also misses an opportunity to exploit rich self-supervision from the temporal content already present in the data. In this work, we explicitly account for prior images and reports, when available, during both training and fine-tuning. Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model. It is designed to be robust to challenges that arise across time, such as pose variation and missing input images. The resulting model excels on downstream tasks in both single- and multi-image setups, achieving state-of-the-art performance on (I) progression classification, (II) phrase grounding, and (III) report generation, whilst offering consistent improvements on disease classification and sentence-similarity tasks. We release a novel multi-modal temporal benchmark dataset, MS-CXR-T, to quantify the quality of vision-language representations in terms of temporal semantics. Our experimental results show the advantages of incorporating prior images and reports to make the most of the data.
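To make the architectural idea concrete, the sketch below is a minimal PyTorch illustration, not the authors' code: class and parameter names such as HybridMultiImageEncoder are hypothetical. It shows one way a CNN backbone can produce patch tokens for the current image and, when available, a prior image, with a small transformer fusing the two token sequences so that a missing prior study is handled gracefully.

```python
# Illustrative sketch only (assumed design, not the BioViL-T implementation):
# a CNN extracts spatial features per image, a transformer fuses current and
# optional prior tokens, and mean pooling yields a global image embedding.
from typing import Optional

import torch
import torch.nn as nn
import torchvision.models as tvm


class HybridMultiImageEncoder(nn.Module):
    def __init__(self, embed_dim: int = 512, num_layers: int = 2):
        super().__init__()
        resnet = tvm.resnet50(weights=None)
        self.cnn = nn.Sequential(*list(resnet.children())[:-2])   # (B, 2048, H, W) feature map
        self.proj = nn.Conv2d(2048, embed_dim, kernel_size=1)
        self.time_embed = nn.Embedding(2, embed_dim)               # 0 = current, 1 = prior
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=num_layers)

    def _tokens(self, image: torch.Tensor, time_idx: int) -> torch.Tensor:
        feats = self.proj(self.cnn(image))                         # (B, D, H, W)
        tokens = feats.flatten(2).transpose(1, 2)                  # (B, H*W, D)
        return tokens + self.time_embed.weight[time_idx]           # mark acquisition time

    def forward(self, current: torch.Tensor,
                prior: Optional[torch.Tensor] = None) -> torch.Tensor:
        tokens = self._tokens(current, 0)
        if prior is not None:                                      # prior study may be missing
            tokens = torch.cat([tokens, self._tokens(prior, 1)], dim=1)
        fused = self.fusion(tokens)
        return fused.mean(dim=1)                                   # global image embedding


# Example: current study encoded with and without a prior image.
encoder = HybridMultiImageEncoder()
x_now, x_prev = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
print(encoder(x_now, x_prev).shape, encoder(x_now).shape)          # (2, 512) in both cases
```

Mean-pooling the fused tokens gives a single embedding that a jointly trained text encoder could be aligned with; the time embedding is one simple way to let the model distinguish current from prior content.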
Making the Most of Text Semantics to Improve Biomedical Vision-Language Processing
Boecking, Benedikt, Usuyama, Naoto, Bannur, Shruthi, Castro, Daniel C., Schwaighofer, Anton, Hyland, Stephanie, Wetscherek, Maria, Naumann, Tristan, Nori, Aditya, Alvarez-Valle, Javier, Poon, Hoifung, Oktay, Ozan
Multi-modal data, such as radiology images and reports, abounds in biomedicine. Interpreting this data at scale is essential for improving clinical care and accelerating clinical research. Biomedical text, with its complex semantics, poses additional challenges for vision-language modelling compared to the general domain, and previous work has used insufficiently adapted models that lack domain-specific language understanding. In this paper, we show that principled textual semantic modelling can substantially improve contrastive learning in self-supervised vision-language processing. We release a language model that achieves state-of-the-art results in radiology natural language inference through its improved vocabulary and a novel language pretraining objective that leverages the semantics and discourse characteristics of radiology reports. Further, we propose a self-supervised joint vision-language approach with a focus on better text modelling. It establishes new state-of-the-art results on a wide range of publicly available benchmarks, in part by leveraging our new domain-specific language model. We also release a new dataset with locally aligned phrase grounding annotations by radiologists to facilitate the study of complex semantic modelling in biomedical vision-language processing. A broad evaluation, including on this new dataset, shows that our contrastive learning approach, aided by textual-semantic modelling, outperforms prior methods on segmentation tasks, despite only using a global-alignment objective.
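As a rough illustration of the global-alignment objective referred to above, the snippet below is a hedged PyTorch sketch under my own assumptions, not the paper's implementation: a symmetric InfoNCE-style contrastive loss between pooled image and report embeddings, where matched image-report pairs share the same batch index.

```python
# Illustrative sketch only: symmetric contrastive (InfoNCE-style) loss over
# global image and text embeddings, i.e. a global-alignment objective.
import torch
import torch.nn.functional as F


def global_alignment_loss(img_emb: torch.Tensor,
                          txt_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Matched image-report pairs occupy the same row/column of the similarity matrix."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature       # (B, B) cosine similarities
    targets = torch.arange(img_emb.size(0))
    loss_i2t = F.cross_entropy(logits, targets)        # image -> report direction
    loss_t2i = F.cross_entropy(logits.t(), targets)    # report -> image direction
    return 0.5 * (loss_i2t + loss_t2i)


# Example with random embeddings for a batch of 8 image-report pairs.
loss = global_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```

The point of the sketch is that the objective only aligns whole-image and whole-report embeddings; any local (e.g. segmentation-level) behaviour then comes from the quality of the learned representations rather than from an explicit local loss.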