Tamir, Jonathan I.
Enhancing Deep Learning-Driven Multi-Coil MRI Reconstruction via Self-Supervised Denoising
Aali, Asad, Arvinte, Marius, Kumar, Sidharth, Arefeen, Yamin I., Tamir, Jonathan I.
We examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We consider two DL-based reconstruction methods: Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL). We evaluate the impact of denoising on the performance of these methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. Experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. We observe that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than their noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE) and higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32 dB, 22 dB, and 12 dB for T2-weighted brain data, and 24 dB, 14 dB, and 4 dB for fat-suppressed knee data. Overall, we show that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising can enable the training of more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
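As a rough illustration of the self-supervised denoising step, the sketch below shows a Monte Carlo SURE loss for i.i.d. Gaussian noise with known standard deviation sigma; the paper's GSURE formulation for general multi-coil measurement models is more involved, and the function and argument names here are illustrative assumptions.

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte Carlo SURE loss for training a denoiser on noisy samples y = x + n,
    n ~ N(0, sigma^2 I), without access to the clean x.

    Real and imaginary parts of complex data are assumed stacked as separate
    real-valued channels (an assumption of this sketch)."""
    fy = denoiser(y)
    # Residual term: relates to ||f(y) - x||^2 up to a constant independent of the denoiser.
    fidelity = ((fy - y) ** 2).sum()
    # Monte Carlo estimate of the network divergence:
    # div f(y) ~= b^T (f(y + eps * b) - f(y)) / eps, with b ~ N(0, I).
    b = torch.randn_like(y)
    div = (b * (denoiser(y + eps * b) - fy)).sum() / eps
    return fidelity - y.numel() * sigma ** 2 + 2 * sigma ** 2 * div
```

A denoiser trained with such a loss can then be applied to the noisy training set, and the denoised outputs used as training data for the downstream DPM or MoDL reconstruction networks.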
Accelerated, Robust Lower-Field Neonatal MRI with Generative Models
Arefeen, Yamin, Levac, Brett, Tamir, Jonathan I.
Neonatal Magnetic Resonance Imaging (MRI) enables non-invasive assessment of potential brain abnormalities during the critical phase of early life development. Recently, interest has developed in lower-field (i.e., below 1.5 Tesla) MRI systems that trade off magnetic field strength for portability and access in the neonatal intensive care unit (NICU). Unfortunately, lower-field neonatal MRI still suffers from long scan times and motion artifacts that can limit its clinical utility for neonates. This work improves motion robustness and accelerates lower-field neonatal MRI through diffusion-based generative modeling and signal-processing-based motion modeling. We first gather a training dataset of clinical neonatal MRI images. We then train a diffusion-based generative model to learn the statistical distribution of fully sampled images, applying several signal processing methods to handle the lower signal-to-noise ratio and lower quality of our MRI images. Finally, we present experiments demonstrating the utility of our generative model in improving reconstruction performance across two tasks: accelerated MRI and motion correction.
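A minimal sketch of the denoising-score-matching objective typically used to train such a diffusion prior, assuming a variance-exploding noise schedule and images as (batch, channel, height, width) tensors; the paper's architecture, schedule, and additional signal processing for the low-SNR neonatal data are not reproduced here.

```python
import math
import torch

def dsm_loss(score_net, x0, sigma_min=0.01, sigma_max=50.0):
    """Denoising score matching: perturb training images with Gaussian noise at a
    random level and regress the score of the perturbed distribution."""
    b = x0.shape[0]
    # Sample per-image noise levels log-uniformly in [sigma_min, sigma_max].
    u = torch.rand(b, device=x0.device)
    sigma = torch.exp(u * (math.log(sigma_max) - math.log(sigma_min)) + math.log(sigma_min))
    sigma = sigma.view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = x0 + sigma * noise
    # The score of N(x0, sigma^2 I) evaluated at x_t is -(x_t - x0) / sigma^2 = -noise / sigma.
    target = -noise / sigma
    pred = score_net(x_t, sigma.flatten())
    # sigma^2 weighting keeps the loss scale comparable across noise levels.
    return ((sigma ** 2) * (pred - target) ** 2).mean()
```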
Evaluating Neural Networks for Early Maritime Threat Detection
Tella, Dhanush, Tiriveedhi, Chandra Teja, Rishe, Naphtali, Tamir, Dan E., Tamir, Jonathan I.
We consider the task of classifying trajectories of boat activities as a proxy for assessing maritime threats. Previous approaches have used entropy-based metrics to cluster boat activity into three broad categories: random walk, following, and chasing. Here, we comprehensively assess the accuracy of neural network-based approaches as alternatives to entropy-based clustering. We train four neural network models and compare them to shallow learning using synthetic data. We also investigate how model accuracy changes as the number of time steps increases, both with and without rotated data. To improve test-time robustness, we normalize trajectories and perform rotation-based data augmentation. Our results show that deep networks can achieve a test-set accuracy of up to 100% on full trajectories, with graceful degradation as the number of time steps decreases, outperforming entropy-based clustering.
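A minimal sketch of the trajectory normalization and rotation-based augmentation described above, assuming 2D positional trajectories of shape (T, 2); the exact normalization and augmentation pipeline in the paper may differ.

```python
import numpy as np

def normalize_trajectory(traj):
    """Translate a (T, 2) boat trajectory to start at the origin and scale it to unit extent."""
    traj = traj - traj[0]            # remove absolute starting position
    extent = np.abs(traj).max()
    return traj / extent if extent > 0 else traj

def random_rotation(traj, rng=None):
    """Apply a random in-plane rotation to a (T, 2) trajectory (data augmentation)."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return traj @ rot.T
```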
Ambient Diffusion Posterior Sampling: Solving Inverse Problems with Diffusion Models trained on Corrupted Data
Aali, Asad, Daras, Giannis, Levac, Brett, Kumar, Sidharth, Dimakis, Alexandros G., Tamir, Jonathan I.
We provide a framework for solving inverse problems with diffusion models learned from linearly corrupted data. Our method, Ambient Diffusion Posterior Sampling (A-DPS), leverages a generative model pre-trained on one type of corruption (e.g., image inpainting) to perform posterior sampling conditioned on measurements from a potentially different forward process (e.g., image blurring). We test the efficacy of our approach on standard natural image datasets (CelebA, FFHQ, and AFHQ) and show that A-DPS can sometimes outperform models trained on clean data for several image restoration tasks in both speed and performance. We further extend the Ambient Diffusion framework to train MRI models with access only to Fourier subsampled multi-coil MRI measurements at various acceleration factors (R=2, 4, 6, 8). We again observe that models trained on highly subsampled data are better priors for solving inverse problems in the high acceleration regime than models trained on fully sampled data. We open-source our code and the trained Ambient Diffusion MRI models: https://github.com/utcsilab/ambient-diffusion-mri.
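A minimal sketch of the measurement-guidance step used in DPS-style posterior sampling with a pretrained score model; the Ambient Diffusion training procedure itself (learning the prior from corrupted data) is not shown, and the interface and step size below are illustrative assumptions.

```python
import torch

def dps_step(x_t, sigma_t, score_net, forward_op, y, step_size=1.0):
    """One measurement-guidance correction for diffusion posterior sampling.

    x_t        : current noisy iterate (real-valued channels in this sketch)
    sigma_t    : current noise level
    score_net  : pretrained diffusion/score model (e.g., an Ambient Diffusion model)
    forward_op : known linear forward operator A for the target inverse problem
    y          : observed measurements
    """
    x_t = x_t.detach().requires_grad_(True)
    # Tweedie-style posterior-mean estimate of the clean image given x_t.
    x0_hat = x_t + sigma_t ** 2 * score_net(x_t, sigma_t)
    # Data-consistency loss under the (possibly different) measurement process.
    data_fit = ((forward_op(x0_hat) - y).abs() ** 2).sum()
    grad = torch.autograd.grad(data_fit, x_t)[0]
    # Pull the iterate toward the measurements; the unconditional diffusion update
    # (not shown) is interleaved with this correction.
    return (x_t - step_size * grad).detach()
```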
Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models
Ravula, Sriram, Levac, Brett, Jalal, Ajil, Tamir, Jonathan I., Dimakis, Alexandros G.
Diffusion-based generative models have been used as powerful priors for magnetic resonance imaging (MRI) reconstruction. We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI that leverages pre-trained diffusion generative models. Crucially, during training we use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process. Experiments across varying anatomies, acceleration factors, and pattern types show that sampling operators learned with our method lead to reconstructions that are competitive with, and in the case of 2D patterns improved over, those obtained with baseline patterns. Our method requires as few as five training images to learn effective sampling patterns.
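A minimal sketch of the training loss, assuming a single-coil Fourier model and a sigmoid relaxation of the binary mask; `posterior_mean` stands in for the single-step posterior-mean reconstruction built from the pretrained diffusion model, and all names here are illustrative.

```python
import torch

def sampling_pattern_loss(mask_logits, x, posterior_mean, noise_std=0.0):
    """Differentiable training loss for learning a k-space sub-sampling pattern.

    mask_logits    : learnable logits with the k-space shape of x
    x              : fully sampled complex image(s), shape (B, H, W), single-coil here
    posterior_mean : differentiable single-step reconstruction, e.g. a posterior-mean
                     (Tweedie) estimate derived from a pretrained diffusion model
    """
    mask = torch.sigmoid(mask_logits)                # continuous relaxation of the binary mask
    k = torch.fft.fft2(x)                            # simulate fully sampled k-space
    y = mask * k + noise_std * torch.randn_like(k)   # masked (and optionally noisy) measurements
    x_hat = posterior_mean(y, mask)                  # one-step reconstruction, differentiable in mask
    return ((x_hat - x).abs() ** 2).mean()           # gradients flow back into mask_logits
```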
Solving Inverse Problems with Score-Based Generative Priors learned from Noisy Data
Aali, Asad, Arvinte, Marius, Kumar, Sidharth, Tamir, Jonathan I.
We present SURE-Score: an approach for learning score-based generative models using training samples corrupted by additive Gaussian noise. When a large training set of clean samples is available, solving inverse problems via score-based (diffusion) generative models trained on the underlying fully-sampled data distribution has recently been shown to outperform end-to-end supervised deep learning. In practice, such a large collection of training data may be prohibitively expensive to acquire in the first place. In this work, we present an approach for approximately learning a score-based generative model of the clean distribution from noisy training data. We formulate and justify a novel loss function that leverages Stein's unbiased risk estimate to jointly denoise the data and learn the score function via denoising score matching, while using only the noisy samples. We demonstrate the generality of SURE-Score by learning priors and applying posterior sampling to ill-posed inverse problems in two practical applications from different domains: compressive wireless multiple-input multiple-output channel estimation and accelerated 2D multi-coil magnetic resonance imaging reconstruction. In both applications, we achieve competitive reconstruction performance when learning at signal-to-noise ratios of 0 dB and 10 dB, respectively.
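A minimal sketch of the flavor of the joint objective: a SURE-type self-supervised denoising term computed only from the noisy samples, plus a denoising-score-matching term on the denoised estimates. The Monte Carlo divergence estimate, the detach, the single noise level sigma_t, and the weighting lam are simplifications of this sketch, not the paper's exact loss.

```python
import torch

def sure_score_loss(denoiser, score_net, y, sigma_w, sigma_t, eps=1e-3, lam=1.0):
    """Joint objective: self-supervised denoising of noisy samples y = x + w,
    w ~ N(0, sigma_w^2 I), plus denoising score matching at noise level sigma_t
    applied to the denoised estimates (used as proxy clean samples)."""
    fy = denoiser(y)
    # SURE term with a Monte Carlo divergence estimate (no clean x required).
    b = torch.randn_like(y)
    div = (b * (denoiser(y + eps * b) - fy)).sum() / eps
    sure = ((fy - y) ** 2).sum() - y.numel() * sigma_w ** 2 + 2 * sigma_w ** 2 * div
    # Denoising score matching on the (detached) denoised output.
    z = torch.randn_like(fy)
    x_t = fy.detach() + sigma_t * z
    dsm = ((sigma_t * score_net(x_t, sigma_t) + z) ** 2).sum()
    return sure + lam * dsm
```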
Robust Compressed Sensing MRI with Deep Generative Priors
Jalal, Ajil, Arvinte, Marius, Daras, Giannis, Price, Eric, Dimakis, Alexandros G., Tamir, Jonathan I.
Compressed sensing [23, 15] has enabled reductions in the number of measurements needed for successful reconstruction in a variety of imaging inverse problems. In particular, it has led to shorter scan times for magnetic resonance imaging (MRI) [58, 86], and most MRI vendors have released products leveraging this framework to accelerate clinical workflows. Despite their successes, sparsity-based methods are limited in their achievable acceleration rates, as the sparsity assumptions are either hand-crafted or limited to simple learned sparse codes [68, 69]. More recently, deep learning techniques have been used as powerful data-driven reconstruction methods for inverse problems [47, 64]. There are two broad families of deep learning inversion techniques [64]: end-to-end supervised and distribution-learning approaches. End-to-end supervised techniques use a training set of measured images and deploy convolutional neural networks (CNNs) and other architectures to learn the inverse mapping from measurements to images. Network architectures that include both CNN blocks and the imaging forward model have grown in popularity, as they combine deep learning with the compressed sensing optimization framework, see e.g.