Quercia, Alessio
MRI Reconstruction with Regularized 3D Diffusion Model (R3DM)
Bangun, Arya, Cao, Zhuo, Quercia, Alessio, Scharr, Hanno, Pfaehler, Elisabeth
In order to speed up the acquisition time, MRI instruments acquire sub-sampled k-space data, i.e., only a fraction of the total k-space data points is sampled during the imaging process. Several two-dimensional (2D) and three-dimensional (3D) image reconstruction techniques for sub-sampled k-space have been proposed, as discussed in [11, 13, 31]. Advances in 3D MR imaging can address the challenges posed by complex anatomical structures of human organs and plant growth. Consequently, the demand for 3D MR image reconstruction methods has intensified. Currently, most works reconstruct a 3D volumetric image by stacking 2D reconstructions, because MR images are acquired slice by slice. This approach does not consider the inter-dependency between slices and can thus lead to inconsistencies and artifacts, as discussed in [4, 8, 50]. This particularly affects datasets with equally distributed information and highly continuous structures along all dimensions, such as roots and vessels [4, 38, 50]. Before deep learning-based models, which learn data-driven priors, model-based iterative reconstruction methods proved effective for the 3D MRI reconstruction problem [15, 54]. The problem is formulated as an optimization problem in which a data consistency term ensures fidelity to the measurements, and a regularization term, such as the Total Variation (TV) penalty [24], provides general prior knowledge about MRI data.
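The model-based formulation mentioned above can be sketched as follows (a standard sketch with illustrative notation, not necessarily the paper's exact symbols: M is the sub-sampling mask, F the Fourier operator, y the acquired k-space data, and λ the regularization weight):

```latex
\hat{x} \;=\; \arg\min_{x} \;\; \underbrace{\tfrac{1}{2} \,\lVert M F x - y \rVert_2^2}_{\text{data consistency}} \;+\; \underbrace{\lambda \,\mathrm{TV}(x)}_{\text{regularization}}
```

The data consistency term enforces agreement with the sampled k-space points, while the TV penalty favors piecewise-smooth volumes; in learning-based variants, the hand-crafted prior is replaced or complemented by a learned (e.g., diffusion-based) prior.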
Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation
Paul, Richard D., Quercia, Alessio, Fortuin, Vincent, Nöh, Katharina, Scharr, Hanno
State-of-the-art computer vision tasks, like monocular depth estimation (MDE), rely heavily on large, modern Transformer-based architectures. However, their application in safety-critical domains demands reliable predictive performance and uncertainty quantification. While Bayesian neural networks provide a conceptually simple approach to meeting those requirements, they suffer from the high dimensionality of the parameter space. Parameter-efficient fine-tuning (PEFT) methods, in particular low-rank adaptations (LoRA), have emerged as a popular strategy for adapting large-scale models to downstream tasks by performing parameter inference on lower-dimensional subspaces. In this work, we investigate the suitability of PEFT methods for subspace Bayesian inference in large-scale Transformer-based vision models. We show that combining BitFit, DiffFit, LoRA, and CoLoRA, a novel LoRA-inspired PEFT method, with Bayesian inference indeed enables more robust and reliable predictive performance in MDE.
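The low-rank adaptation idea underlying this line of work can be sketched as follows (a minimal NumPy sketch of the standard LoRA parametrization, not of the paper's CoLoRA variant; all names and shapes are illustrative). The frozen pretrained weight W is augmented by a low-rank product A·B, and only A and B (a small subspace of parameters) are trained or, here, treated probabilistically:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 4  # illustrative layer sizes and rank

W = rng.standard_normal((d_in, d_out))      # frozen pretrained weight
A = rng.standard_normal((d_in, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d_out))                    # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha=1.0):
    """Adapted linear layer: y = x (W + (alpha / r) A B).

    Only A and B are updated; W stays frozen, so inference (point
    estimation or Bayesian) happens in a (d_in + d_out) * r subspace
    instead of the full d_in * d_out parameter space.
    """
    rank = A.shape[1]
    return x @ W + (alpha / rank) * (x @ A @ B)

x = rng.standard_normal((2, d_in))
# With B zero-initialized, the adapted layer reproduces the base layer exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W)
```

Because B starts at zero, adaptation begins from the pretrained model and only perturbs it through the low-rank update; placing a posterior over A and B (rather than over W) is what makes subspace Bayesian inference tractable for large Transformers.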