Brain MRI


Sparsity-Driven Parallel Imaging Consistency for Improved Self-Supervised MRI Reconstruction

Alçalar, Yaşar Utku, Akçakaya, Mehmet

arXiv.org Artificial Intelligence

Physics-driven deep learning (PD-DL) models have proven to be a powerful approach for improved reconstruction of rapid MRI scans. In order to train these models in scenarios where fully-sampled reference data is unavailable, self-supervised learning has gained prominence. However, its application at high acceleration rates frequently introduces artifacts, compromising image fidelity. To mitigate this shortcoming, we propose a novel way to train PD-DL networks via carefully-designed perturbations. In particular, we enhance the k-space masking idea of conventional self-supervised learning with a novel consistency term that assesses the model's ability to accurately predict the added perturbations in a sparse domain, leading to more reliable and artifact-free reconstructions. The results obtained from the fastMRI knee and brain datasets show that the proposed training strategy effectively reduces aliasing artifacts and mitigates noise amplification at high acceleration rates, outperforming state-of-the-art self-supervised methods both visually and quantitatively.
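The abstract leaves the implementation open; below is a minimal numpy sketch of the perturbation-consistency idea, with a pixel-sparse perturbation standing in for the paper's sparse-domain design and a zero-filled inverse FFT standing in for the trained PD-DL network. All names and parameters are illustrative assumptions, not the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)

    def zero_filled(kspace, mask):
        # Placeholder "network": zero-filled inverse FFT. A trained unrolled
        # PD-DL model with data-consistency blocks would go here.
        return np.fft.ifft2(kspace * mask)

    def perturbation_consistency(model, kspace, mask, eps=0.05, n_spikes=16):
        # Add a sparse image-domain perturbation, re-encode it into the
        # measured k-space, and penalize the model if its output does not
        # shift by (approximately) the same perturbation.
        x = model(kspace, mask)
        p = np.zeros(kspace.shape, dtype=complex)
        p.flat[rng.choice(p.size, size=n_spikes, replace=False)] = eps
        x_pert = model(kspace + mask * np.fft.fft2(p), mask)
        return np.mean(np.abs((x_pert - x) - p) ** 2)

    # Toy usage: random "k-space" with a 50% undersampling mask.
    k = np.fft.fft2(rng.standard_normal((64, 64)))
    m = (rng.random((64, 64)) < 0.5).astype(float)
    print(perturbation_consistency(zero_filled, k, m))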


NOVA: A Benchmark for Anomaly Localization and Clinical Reasoning in Brain MRI

Bercea, Cosmin I., Li, Jun, Raffler, Philipp, Riedel, Evamaria O., Schmitzer, Lena, Kurz, Angela, Bitzer, Felix, Roßmüller, Paula, Canisius, Julian, Beyrle, Mirjam L., Liu, Che, Bai, Wenjia, Kainz, Bernhard, Schnabel, Julia A., Wiestler, Benedikt

arXiv.org Artificial Intelligence

In many real-world applications, deployed models encounter inputs that differ from the data seen during training. Out-of-distribution detection identifies whether an input stems from an unseen distribution, while open-world recognition flags such inputs to ensure the system remains robust as ever-emerging, previously unknown categories appear and must be addressed without retraining. Foundation and vision-language models are pre-trained on large and diverse datasets with the expectation of broad generalization across domains, including medical imaging. However, benchmarking these models on test sets with only a few common outlier types silently collapses the evaluation back to a closed-set problem, masking failures on rare or truly novel conditions encountered in clinical use. We therefore present NOVA, a challenging, real-life, evaluation-only benchmark of ~900 brain MRI scans that span 281 rare pathologies and heterogeneous acquisition protocols. Each case includes rich clinical narratives and double-blinded expert bounding-box annotations. Together, these enable joint assessment of anomaly localisation, visual captioning, and diagnostic reasoning. Because NOVA is never used for training, it serves as an extreme stress-test of out-of-distribution generalisation: models must bridge a distribution gap both in sample appearance and in semantic space. Baseline results with leading vision-language models (GPT-4o, Gemini 2.0 Flash, and Qwen2.5-VL-72B) reveal substantial performance drops across all tasks, establishing NOVA as a rigorous testbed for advancing models that can detect, localize, and reason about truly unknown anomalies.
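The abstract does not spell out NOVA's scoring protocol; as a minimal sketch of how bounding-box localisation could be checked against the expert annotations, the following uses a plain IoU hit criterion. The 0.3 threshold and the (x1, y1, x2, y2) box format are assumptions.

    def box_iou(a, b):
        # Boxes as (x1, y1, x2, y2) in pixels.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    # A predicted box counts as a hit if it overlaps any expert box beyond
    # the threshold; averaging hits over cases gives a simple localisation
    # score for a vision-language model's outputs.
    preds = [(10, 12, 40, 44)]
    experts = [(12, 10, 38, 42)]
    hit_rate = sum(any(box_iou(p, g) >= 0.3 for g in experts) for p in preds) / len(preds)
    print(hit_rate)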


Enhancing Reconstruction-Based Out-of-Distribution Detection in Brain MRI with Model and Metric Ensembles

Huijben, Evi M. C., Amirrajab, Sina, Pluim, Josien P. W.

arXiv.org Artificial Intelligence

Out-of-distribution (OOD) detection is crucial for safely deploying automated medical image analysis systems, as abnormal patterns in images could hamper their performance. However, OOD detection in medical imaging remains an open challenge, and we address three gaps: the underexplored potential of a simple OOD detection model, the lack of optimization of deep learning strategies specifically for OOD detection, and the selection of appropriate reconstruction metrics. In this study, we investigated the effectiveness of a reconstruction-based autoencoder for unsupervised detection of synthetic artifacts in brain MRI. We evaluated the general reconstruction capability of the model, analyzed the impact of the selected training epoch and reconstruction metrics, assessed the potential of model and/or metric ensembles, and tested the model on a dataset containing a diverse range of artifacts. Among the metrics assessed, the contrast component of SSIM and LPIPS consistently outperformed others in detecting homogeneous circular anomalies. By combining two well-converged models and using LPIPS and contrast as reconstruction metrics, we achieved a pixel-level area under the Precision-Recall curve of 0.66. Furthermore, with the more realistic OOD dataset, we observed that the detection performance varied between artifact types; local artifacts were more difficult to detect, while global artifacts showed better detection results. These findings underscore the importance of carefully selecting metrics and model configurations, and highlight the need for tailored approaches, as standard deep learning approaches do not always align with the unique needs of OOD detection.
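For concreteness, here is a small numpy/scipy sketch of the contrast component of SSIM used as a pixel-wise reconstruction metric. The window size and constant follow common SSIM defaults and are assumptions; an LPIPS map (e.g. from the lpips package) could be averaged in to form the metric ensemble described above.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def ssim_contrast_map(x, y, win=7, c2=0.03 ** 2):
        # Pixel-wise contrast component of SSIM,
        #   c(x, y) = (2*sx*sy + C2) / (sx^2 + sy^2 + C2),
        # computed over a local window; inputs assumed scaled to [0, 1].
        mx, my = uniform_filter(x, win), uniform_filter(y, win)
        vx = uniform_filter(x * x, win) - mx ** 2
        vy = uniform_filter(y * y, win) - my ** 2
        sx, sy = np.sqrt(np.maximum(vx, 0)), np.sqrt(np.maximum(vy, 0))
        return (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)

    def anomaly_map(x, recon):
        # High where the reconstruction's local contrast deviates from the
        # input, i.e. where the autoencoder failed to reproduce the image.
        return 1.0 - ssim_contrast_map(x, recon)

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    print(anomaly_map(img, img).max())  # ~0 for a perfect reconstruction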


Brain Tumor Segmentation (BraTS) Challenge 2024: Meningioma Radiotherapy Planning Automated Segmentation

LaBella, Dominic, Schumacher, Katherine, Mix, Michael, Leu, Kevin, McBurney-Lin, Shan, Nedelec, Pierre, Villanueva-Meyer, Javier, Shapey, Jonathan, Vercauteren, Tom, Chia, Kazumi, Al-Salihi, Omar, Leu, Justin, Halasz, Lia, Velichko, Yury, Wang, Chunhao, Kirkpatrick, John, Floyd, Scott, Reitman, Zachary J., Mullikin, Trey, Bagci, Ulas, Sachdev, Sean, Hattangadi-Gluth, Jona A., Seibert, Tyler, Farid, Nikdokht, Puett, Connor, Pease, Matthew W., Shiue, Kevin, Anwar, Syed Muhammad, Faghani, Shahriar, Haider, Muhammad Ammar, Warman, Pranav, Albrecht, Jake, Jakab, András, Moassefi, Mana, Chung, Verena, Aristizabal, Alejandro, Karargyris, Alexandros, Kassem, Hasan, Pati, Sarthak, Sheller, Micah, Huang, Christina, Coley, Aaron, Ghanta, Siddharth, Schneider, Alex, Sharp, Conrad, Saluja, Rachit, Kofler, Florian, Lohmann, Philipp, Vollmuth, Phillipp, Gagnon, Louis, Adewole, Maruf, Li, Hongwei Bran, Kazerooni, Anahita Fathi, Tahon, Nourel Hoda, Anazodo, Udunna, Moawad, Ahmed W., Menze, Bjoern, Linguraru, Marius George, Aboian, Mariam, Wiestler, Benedikt, Baid, Ujjwal, Conte, Gian-Marco, Rauschecker, Andreas M. T., Nada, Ayman, Abayazeed, Aly H., Huang, Raymond, de Verdier, Maria Correia, Rudie, Jeffrey D., Bakas, Spyridon, Calabrese, Evan

arXiv.org Artificial Intelligence

The 2024 Brain Tumor Segmentation Meningioma Radiotherapy (BraTS-MEN-RT) challenge aims to advance automated segmentation algorithms using the largest known multi-institutional dataset of radiotherapy planning brain MRIs with expert-annotated target labels for patients with intact or post-operative meningioma who underwent either conventional external beam radiotherapy or stereotactic radiosurgery. Each case includes a defaced 3D post-contrast T1-weighted radiotherapy planning MRI in its native acquisition space, accompanied by a single-label "target volume" representing the gross tumor volume (GTV) and any at-risk post-operative site. Target volume annotations adhere to established radiotherapy planning protocols, ensuring consistency across cases and institutions. For pre-operative meningiomas, the target volume encompasses the entire GTV and associated nodular dural tail, while for post-operative cases, it includes at-risk resection cavity margins as determined by the treating institution. Case annotations were reviewed and approved by expert neuroradiologists and radiation oncologists. Participating teams will develop, containerize, and evaluate automated segmentation models using this comprehensive dataset. Model performance will be assessed using the lesion-wise Dice Similarity Coefficient and the 95% Hausdorff distance. The top-performing teams will be recognized at the Medical Image Computing and Computer Assisted Intervention Conference in October 2024. BraTS-MEN-RT is expected to significantly advance automated radiotherapy planning by enabling precise tumor segmentation and facilitating tailored treatment, ultimately improving patient outcomes.
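As a sketch of the two evaluation metrics named above, the following computes Dice and a 95th-percentile Hausdorff distance for one pair of binary masks, in voxel units. The challenge's lesion-wise variant (scoring each connected component separately) and physical mm spacing are omitted here for brevity.

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def dice(a, b):
        # Dice Similarity Coefficient for binary masks.
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def hd95(a, b):
        # 95th-percentile symmetric surface distance in voxel units.
        surface = lambda m: m & ~binary_erosion(m)
        sa, sb = surface(a), surface(b)
        d_ab = distance_transform_edt(~sb)[sa]  # a's surface to b's surface
        d_ba = distance_transform_edt(~sa)[sb]
        return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

    gt = np.zeros((32, 32, 32), bool); gt[8:16, 8:16, 8:16] = True
    pred = np.zeros_like(gt); pred[9:17, 8:16, 8:16] = True
    print(dice(gt, pred), hd95(gt, pred))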


SynthBrainGrow: Synthetic Diffusion Brain Aging for Longitudinal MRI Data Generation in Young People

Zapaishchykova, Anna, Kann, Benjamin H., Tak, Divyanshu, Ye, Zezhong, Haas-Kogan, Daphne A., Aerts, Hugo J. W. L.

arXiv.org Artificial Intelligence

Synthetic longitudinal brain MRI simulates brain aging and would enable more efficient research on neurodevelopmental and neurodegenerative conditions. Synthetically generated, age-adjusted brain images could serve as valuable alternatives to costly longitudinal imaging acquisitions, serve as internal controls for studies looking at the effects of environmental or therapeutic modifiers on brain development, and allow data augmentation for diverse populations. In this paper, we present a diffusion-based approach called SynthBrainGrow for synthetic brain aging with a two-year step. To validate the feasibility of using synthetically generated data on downstream tasks, we compared structural volumetrics of two-year-aged brains against synthetically aged brain MRI. Results show that SynthBrainGrow can accurately capture substructure volumetrics and simulate structural changes such as ventricle enlargement and cortical thinning. Our approach provides a novel way to generate longitudinal brain datasets from cross-sectional data to enable augmented training and benchmarking of computational tools for analyzing lifespan trajectories. This work signifies an important advance in generative modeling to synthesize realistic longitudinal data with limited lifelong MRI scans. The code is available at XXX.

Keywords: Generative Models, Diffusion Probabilistic Models, Neural aging.
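The validation described above (comparing substructure volumetrics of truly two-year-aged scans against synthetically aged ones) reduces to volume arithmetic over segmentation masks; a toy sketch with stand-in random masks follows, where the structure names and voxel size are illustrative assumptions.

    import numpy as np

    def volume_ml(mask, voxel_mm3=1.0):
        # Structure volume in millilitres from a binary segmentation mask.
        return mask.sum() * voxel_mm3 / 1000.0

    rng = np.random.default_rng(0)
    for name in ("ventricles", "cortex", "white_matter"):
        real = rng.random((96, 96, 96)) < 0.1   # stand-in segmentation of the real follow-up
        synth = rng.random((96, 96, 96)) < 0.1  # stand-in segmentation of the synthetic scan
        err = abs(volume_ml(synth) - volume_ml(real)) / volume_ml(real)
        print(f"{name}: relative volume error {err:.1%}")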


Guided Reconstruction with Conditioned Diffusion Models for Unsupervised Anomaly Detection in Brain MRIs

Behrendt, Finn, Bhattacharya, Debayan, Mieling, Robin, Maack, Lennart, Krüger, Julia, Opfer, Roland, Schlaefer, Alexander

arXiv.org Artificial Intelligence

Unsupervised anomaly detection in brain MRIs aims to identify abnormalities as outliers from a healthy training distribution. Reconstruction-based approaches that use generative models to learn to reconstruct healthy brain anatomy are commonly used for this task. Diffusion models are an emerging class of deep generative models that show great potential regarding reconstruction fidelity. However, they face challenges in preserving intensity characteristics in the reconstructed images, limiting their performance in anomaly detection. To address this challenge, we propose to condition the denoising mechanism of diffusion models with additional information about the image to reconstruct, coming from a latent representation of the noise-free input image. This conditioning enables high-fidelity reconstruction of healthy brain structures while aligning local intensity characteristics of input-reconstruction pairs. We evaluate our method's reconstruction quality, domain adaptation features, and finally segmentation performance on publicly available data sets with various pathologies. Using our proposed conditioning mechanism, we can reduce false-positive predictions and enable a more precise delineation of anomalies, which significantly enhances the anomaly detection performance compared to established state-of-the-art approaches to unsupervised anomaly detection in brain MRI. Furthermore, our approach shows promise in domain adaptation across different MRI acquisitions and simulated contrasts, a crucial property of general anomaly detection methods.
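A minimal PyTorch sketch of the conditioning interface: a latent code of the noise-free input is injected into every denoising step, so reconstructions can keep the input's local intensity characteristics. Layer sizes and the channel-concatenation scheme are assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class ConditionedDenoiser(nn.Module):
        def __init__(self, ch=16, zdim=32):
            super().__init__()
            # Encoder of the noise-free image into a global latent code.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, zdim))
            # Denoiser sees the noisy image, the latent, and the timestep.
            self.denoise = nn.Sequential(
                nn.Conv2d(1 + zdim + 1, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 1, 3, padding=1))

        def forward(self, x_t, t, x_clean):
            z = self.encoder(x_clean)                       # latent of clean input
            b, _, h, w = x_t.shape
            zmap = z[:, :, None, None].expand(b, -1, h, w)  # broadcast latent
            tmap = torch.full((b, 1, h, w), float(t))
            return self.denoise(torch.cat([x_t, zmap, tmap], dim=1))

    # Toy forward pass: predict the noise added to a brain slice.
    net = ConditionedDenoiser()
    x0 = torch.rand(2, 1, 64, 64)
    eps_hat = net(x0 + 0.5 * torch.randn_like(x0), t=10, x_clean=x0)
    print(eps_hat.shape)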


Zero-Shot Self-Supervised Learning for MRI Reconstruction

Yaman, Burhaneddin, Hosseini, Seyed Amir Hossein, Akçakaya, Mehmet

arXiv.org Artificial Intelligence

Deep learning (DL) has emerged as a powerful tool for accelerated MRI reconstruction, but often necessitates a database of fully-sampled measurements for training. Recent self-supervised and unsupervised learning approaches enable training without fully-sampled data. However, a database of undersampled measurements may not be available in many scenarios, especially for scans involving contrast or translational acquisitions in development. Moreover, recent studies show that database-trained models may not generalize well when the unseen measurements differ in terms of sampling pattern, acceleration rate, SNR, image contrast, and anatomy. Such challenges necessitate a new methodology to enable subject-specific DL MRI reconstruction without external training datasets, since it is clinically imperative to provide high-quality reconstructions that can be used to identify lesions/disease for every individual. In this work, we propose a zero-shot self-supervised learning approach to perform subject-specific accelerated DL MRI reconstruction to tackle these issues. The proposed approach partitions the available measurements from a single scan into three disjoint sets. Two of these sets are used to enforce data consistency and define loss during training for self-supervision, while the last set serves to self-validate, establishing an early stopping criterion. In the presence of models pre-trained on a database with different image characteristics, we show that the proposed approach can be combined with transfer learning for faster convergence time and reduced computational complexity. Magnetic resonance imaging (MRI) is a non-invasive, radiation-free medical imaging modality that provides excellent soft tissue contrast for diagnostic purposes.
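A minimal numpy sketch of the three-way k-space partition described above; the split ratios are assumptions, not the paper's values.

    import numpy as np

    def three_way_split(sampled_mask, rho_loss=0.3, rho_val=0.2, rng=None):
        # Partition the acquired k-space locations of a single scan into
        # three disjoint sets: data-consistency (training), loss, and
        # self-validation (used only for early stopping).
        rng = rng or np.random.default_rng(0)
        idx = rng.permutation(np.flatnonzero(sampled_mask))
        n_loss, n_val = int(rho_loss * idx.size), int(rho_val * idx.size)
        masks = []
        for sel in (idx[n_loss + n_val:], idx[:n_loss], idx[n_loss:n_loss + n_val]):
            m = np.zeros_like(sampled_mask)
            m.flat[sel] = 1
            masks.append(m)
        return masks  # [train/DC mask, loss mask, validation mask]

    m = (np.random.default_rng(1).random((64, 64)) < 0.4).astype(int)
    dc, loss, val = three_way_split(m)
    assert (dc + loss + val == m).all()  # disjoint and covers all samples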


Beware of diffusion models for synthesizing medical images -- A comparison with GANs in terms of memorizing brain MRI and chest x-ray images

Akbar, Muhammad Usman, Wang, Wuhao, Eklund, Anders

arXiv.org Artificial Intelligence

Diffusion models were initially developed for text-to-image generation and are now being utilized to generate high-quality synthetic images. Preceded by GANs, diffusion models have shown impressive results using various evaluation metrics. However, commonly used metrics such as FID and IS are not suitable for determining whether diffusion models are simply reproducing the training images. Here we train StyleGAN and diffusion models, using BRATS20, BRATS21 and a chest x-ray pneumonia dataset, to synthesize brain MRI and chest x-ray images, and measure the correlation between the synthetic images and all training images. Our results show that diffusion models are more likely to memorize the training images, compared to StyleGAN, especially for small datasets and when using 2D slices from 3D volumes. Researchers should be careful when using diffusion models for medical imaging, if the final goal is to share the synthetic images.
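As a sketch of such a memorization check, one simple measure is the maximum Pearson correlation of each synthetic image with any training image; the paper's exact correlation measure may differ.

    import numpy as np

    def max_train_correlation(synth, train):
        # For each synthetic image, the highest Pearson correlation with
        # any training image; values near 1 suggest memorization.
        s = synth.reshape(len(synth), -1)
        t = train.reshape(len(train), -1)
        s = (s - s.mean(1, keepdims=True)) / s.std(1, keepdims=True)
        t = (t - t.mean(1, keepdims=True)) / t.std(1, keepdims=True)
        corr = s @ t.T / s.shape[1]  # (n_synth, n_train) correlation matrix
        return corr.max(axis=1)

    rng = np.random.default_rng(0)
    train = rng.random((100, 32, 32))
    synth = np.concatenate([train[:5], rng.random((5, 32, 32))])  # 5 exact copies
    print(max_train_correlation(synth, train).round(2))  # copies score 1.0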


AI enables noninvasive tumor pathology mapping on brain MRI

#artificialintelligence

Generally, these results indicate that our radio-pathomic model can provide clinicians with noninvasive maps of tumor pathology, containing information previously only available via surgical resection.