Küstner, Thomas
Highly efficient non-rigid registration in k-space with application to cardiac Magnetic Resonance Imaging
Ghoul, Aya, Hammernik, Kerstin, Lingg, Andreas, Krumm, Patrick, Rueckert, Daniel, Gatidis, Sergios, Küstner, Thomas
In Magnetic Resonance Imaging (MRI), highly temporally resolved motion estimates are valuable for image acquisition and reconstruction, MR-guided radiotherapy, dynamic contrast enhancement, flow and perfusion imaging, and functional assessment of motion patterns in cardiovascular, abdominal, peristaltic, fetal, or musculoskeletal imaging. Conventionally, these motion estimates are derived through image-based registration, a particularly challenging task for complex motion patterns and high dynamic resolution. The accelerated scans used in such applications introduce imaging artifacts that compromise motion estimation. In this work, we propose a novel self-supervised deep learning-based framework, dubbed the Local-All Pass Attention Network (LAPANet), for non-rigid motion estimation directly from the acquired accelerated Fourier space, i.e. k-space. The proposed approach models non-rigid motion as the cumulative sum of local translational displacements, following the Local All-Pass (LAP) registration technique. LAPANet was evaluated on cardiac motion estimation across various sampling trajectories and acceleration rates. Our results demonstrate superior accuracy compared to prior conventional and deep learning-based registration methods, accommodating as few as 2 lines/frame in a Cartesian trajectory and 3 spokes/frame in a non-Cartesian trajectory. The achieved high temporal resolution (below 5 ms) for non-rigid motion opens new avenues for motion detection, tracking, and correction in dynamic and real-time MRI applications.
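The idea of composing a non-rigid field from local translational displacements can be illustrated outside any network. Below is a minimal NumPy sketch: per-patch translations are estimated with phase correlation (a frequency-domain shift estimator in the same spirit as the LAP all-pass relation, not the authors' actual filter-basis fit) and assembled into a coarse patchwise displacement field; `phase_corr_shift` and `local_translation_field` are illustrative names, not part of LAPANet.

```python
import numpy as np

def phase_corr_shift(ref, mov):
    """Integer shift s such that mov ~ roll(ref, s), via phase correlation."""
    R, M = np.fft.fft2(ref), np.fft.fft2(mov)
    cross = M * np.conj(R)
    cross /= np.maximum(np.abs(cross), 1e-12)   # whiten: keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # map wrapped indices back to signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def local_translation_field(ref, mov, patch=32):
    """Approximate a non-rigid field patchwise by purely translational
    displacements (one coarse LAP-style pass)."""
    H, W = ref.shape
    field = np.zeros((H // patch, W // patch, 2))
    for i in range(H // patch):
        for j in range(W // patch):
            win = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            field[i, j] = phase_corr_shift(ref[win], mov[win])
    return field
```

In the paper's formulation such local estimates are accumulated over iterations/scales into a dense deformation; the sketch stops at a single coarse pass.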
Attention-aware non-rigid image registration for accelerated MR imaging
Ghoul, Aya, Pan, Jiazhen, Lingg, Andreas, Kübler, Jens, Krumm, Patrick, Hammernik, Kerstin, Rueckert, Daniel, Gatidis, Sergios, Küstner, Thomas
Accurate motion estimation at high acceleration factors enables rapid motion-compensated reconstruction in Magnetic Resonance Imaging (MRI) without compromising the diagnostic image quality. In this work, we introduce an attention-aware deep learning-based framework that can perform non-rigid pairwise registration for fully sampled and accelerated MRI. We extract local visual representations to build similarity maps between the registered image pairs at multiple resolution levels and additionally leverage long-range contextual information using a transformer-based module to alleviate ambiguities in the presence of artifacts caused by undersampling. We combine local and global dependencies to perform simultaneous coarse and fine motion estimation. The proposed method was evaluated on in-house acquired fully sampled and accelerated data of 101 patients and 62 healthy subjects undergoing cardiac and thoracic MRI. The impact of motion estimation accuracy on the downstream task of motion-compensated reconstruction was analyzed. We demonstrate that our model derives reliable and consistent motion fields across different sampling trajectories (Cartesian and radial) and acceleration factors of up to 16x for cardiac motion and 30x for respiratory motion and achieves superior image quality in motion-compensated reconstruction qualitatively and quantitatively compared to conventional and recent deep learning-based approaches. The code is publicly available at https://github.com/lab-midas/GMARAFT.
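The local similarity maps described above can be sketched as a correlation volume: each position in one feature map is compared against a small search window in the other. The cosine-similarity variant below is a generic illustration under assumed (H, W, C) feature maps; the network's actual features, resolution pyramid, and transformer module are not reproduced.

```python
import numpy as np

def local_correlation(f1, f2, radius=1):
    """Cosine-similarity volume between two feature maps over a
    (2*radius+1)^2 search window; f1, f2: (H, W, C) arrays."""
    H, W, _ = f1.shape
    f1 = f1 / (np.linalg.norm(f1, axis=-1, keepdims=True) + 1e-8)
    f2 = f2 / (np.linalg.norm(f2, axis=-1, keepdims=True) + 1e-8)
    pad = np.pad(f2, ((radius, radius), (radius, radius), (0, 0)))
    vols = []
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = pad[dy:dy + H, dx:dx + W]   # f2 displaced by (dy-r, dx-r)
            vols.append(np.sum(f1 * shifted, axis=-1))
    return np.stack(vols, axis=-1)                # (H, W, (2r+1)^2)
```

Such volumes capture the local dependencies; the paper additionally attends over global context to disambiguate matches corrupted by undersampling artifacts.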
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Li, Jianning, Zhou, Zongwei, Yang, Jiancheng, Pepe, Antonio, Gsaxner, Christina, Luijten, Gijs, Qu, Chongyu, Zhang, Tiezheng, Chen, Xiaoxi, Li, Wenxuan, Wodzinski, Marek, Friedrich, Paul, Xie, Kangxian, Jin, Yuan, Ambigapathy, Narmada, Nasca, Enrico, Solak, Naida, Melito, Gian Marco, Vu, Viet Duc, Memon, Afaque R., Schlachta, Christopher, De Ribaupierre, Sandrine, Patel, Rajnikant, Eagleson, Roy, Chen, Xiaojun, Mächler, Heinrich, Kirschke, Jan Stefan, de la Rosa, Ezequiel, Christ, Patrick Ferdinand, Li, Hongwei Bran, Ellis, David G., Aizenberg, Michele R., Gatidis, Sergios, Küstner, Thomas, Shusharina, Nadya, Heller, Nicholas, Andrearczyk, Vincent, Depeursinge, Adrien, Hatt, Mathieu, Sekuboyina, Anjany, Löffler, Maximilian, Liebl, Hans, Dorent, Reuben, Vercauteren, Tom, Shapey, Jonathan, Kujawa, Aaron, Cornelissen, Stefan, Langenhuizen, Patrick, Ben-Hamadou, Achraf, Rekik, Ahmed, Pujades, Sergi, Boyer, Edmond, Bolelli, Federico, Grana, Costantino, Lumetti, Luca, Salehi, Hamidreza, Ma, Jun, Zhang, Yao, Gharleghi, Ramtin, Beier, Susann, Sowmya, Arcot, Garza-Villarreal, Eduardo A., Balducci, Thania, Angeles-Valdez, Diego, Souza, Roberto, Rittner, Leticia, Frayne, Richard, Ji, Yuanfeng, Ferrari, Vincenzo, Chatterjee, Soumick, Dubost, Florian, Schreiber, Stefanie, Mattern, Hendrik, Speck, Oliver, Haehn, Daniel, John, Christoph, Nürnberger, Andreas, Pedrosa, João, Ferreira, Carlos, Aresta, Guilherme, Cunha, António, Campilho, Aurélio, Suter, Yannick, Garcia, Jose, Lalande, Alain, Vandenbossche, Vicky, Van Oevelen, Aline, Duquesne, Kate, Mekhzoum, Hamza, Vandemeulebroucke, Jef, Audenaert, Emmanuel, Krebs, Claudia, van Leeuwen, Timo, Vereecke, Evie, Heidemeyer, Hauke, Röhrig, Rainer, Hölzle, Frank, Badeli, Vahid, Krieger, Kathrin, Gunzer, Matthias, Chen, Jianxu, van Meegdenburg, Timo, Dada, Amin, Balzer, Miriam, Fragemann, Jana, Jonske, Frederic, Rempe, Moritz, Malorodov, Stanislav, Bahnsen, Fin H., Seibold, Constantin, Jaus, Alexander, Marinov, Zdravko, Jaeger, Paul F., 
Stiefelhagen, Rainer, Santos, Ana Sofia, Lindo, Mariana, Ferreira, André, Alves, Victor, Kamp, Michael, Abourayya, Amr, Nensa, Felix, Hörst, Fabian, Brehmer, Alexander, Heine, Lukas, Hanusrichter, Yannik, Weßling, Martin, Dudda, Marcel, Podleska, Lars E., Fink, Matthias A., Keyl, Julius, Tserpes, Konstantinos, Kim, Moon-Sung, Elhabian, Shireen, Lamecker, Hans, Zukić, Dženan, Paniagua, Beatriz, Wachinger, Christian, Urschler, Martin, Duong, Luc, Wasserthal, Jakob, Hoyer, Peter F., Basu, Oliver, Maal, Thomas, Witjes, Max J. H., Schiele, Gregor, Chang, Ti-chiun, Ahmadi, Seyed-Ahmad, Luo, Ping, Menze, Bjoern, Reyes, Mauricio, Deserno, Thomas M., Davatzikos, Christos, Puladi, Behrus, Fua, Pascal, Yuille, Alan L., Kleesiek, Jens, Egger, Jan
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is evident from the numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the classification of brain tumors, facial and skull reconstruction, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
Uncertainty Estimation and Propagation in Accelerated MRI Reconstruction
Fischer, Paul, Küstner, Thomas, Baumgartner, Christian F.
MRI reconstruction techniques based on deep learning have led to unprecedented reconstruction quality, especially in highly accelerated settings. However, deep learning techniques are also known to fail unexpectedly and hallucinate structures. This is particularly problematic if reconstructions are directly used for downstream tasks such as real-time treatment guidance or automated extraction of clinical parameters (e.g. via segmentation). Well-calibrated uncertainty quantification will be a key ingredient for the safe use of this technology in clinical practice. In this paper, we propose a novel probabilistic reconstruction technique (PHiRec) building on the idea of conditional hierarchical variational autoencoders. We demonstrate that our proposed method produces high-quality reconstructions as well as uncertainty quantification that is substantially better calibrated than several strong baselines. We furthermore demonstrate how uncertainties arising in the MR reconstruction can be propagated to a downstream segmentation task, and show that PHiRec also allows well-calibrated estimation of segmentation uncertainties that originated in the MR reconstruction process.
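The propagation of reconstruction uncertainty to a segmentation task can be illustrated with a generic Monte-Carlo sketch. Here `sample_recon` and `segment` are hypothetical stand-ins for a probabilistic reconstruction sampler (such as the decoder of a hierarchical VAE) and a deterministic segmentation network; the summary statistics, not the models, are the point.

```python
import numpy as np

def propagate_uncertainty(sample_recon, segment, n_samples=32):
    """Monte-Carlo propagation: draw reconstruction samples, segment each,
    and summarize the resulting segmentation distribution pixelwise."""
    probs = np.stack([segment(sample_recon()) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.var(axis=0)   # mean mask, pixelwise variance
```

Pixels whose segmentation flips across reconstruction samples receive high variance, which is exactly the "segmentation uncertainty originating in the reconstruction" that the abstract describes, here in its simplest sampling-based form.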
Reconstruction-driven motion estimation for motion-compensated MR CINE imaging
Pan, Jiazhen, Huang, Wenqi, Rueckert, Daniel, Küstner, Thomas, Hammernik, Kerstin
In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an effective approach to address highly undersampled acquisitions by incorporating motion information between frames. In this work, we propose a deep learning-based framework to address the MCMR problem efficiently. Contrary to state-of-the-art (SOTA) MCMR methods which break the original problem into two sub-optimization problems, i.e. motion estimation and reconstruction, we formulate this problem as a single entity with one single optimization. We discard the canonical motion-warping loss (similarity measurement between motion-warped images and target images) for estimating the motion and instead drive the motion estimation process directly by the final reconstruction performance. Higher reconstruction quality is achieved without using any smoothness loss terms and without iterative processing between motion estimation and reconstruction. Therefore, we avoid non-trivial tuning of loss weighting factors and time-consuming iterative processing. Experiments on 43 in-house acquired 2D CINE datasets indicate that the proposed MCMR framework can deliver artifact-free motion estimation and high-quality MR images even for imaging accelerations up to 20x. The proposed framework is compared to SOTA non-MCMR and MCMR methods and outperforms these methods qualitatively and quantitatively in all applied metrics across all experiments with different acceleration rates.
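The contrast between a motion-warping loss and a reconstruction-driven loss can be sketched with a toy single-coil forward operator: instead of comparing the warped image against a target image, the warped image is pushed through the acquisition model and compared against the acquired k-space data. The nearest-neighbour warp and FFT masking below are deliberate simplifications (the paper uses differentiable warping and a full reconstruction network), and all function names are illustrative.

```python
import numpy as np

def undersampled_fft(img, mask):
    """Toy single-coil forward operator A: 2-D FFT followed by k-space masking."""
    return np.fft.fft2(img) * mask

def warp_nn(img, flow):
    """Nearest-neighbour backward warping (stand-in for a differentiable warp)."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return img[sy, sx]

def reconstruction_driven_loss(moving, flow, k_target, mask):
    """Penalize the mismatch in acquired k-space after warping, rather than
    an image-space similarity between warped and target images."""
    pred_k = undersampled_fft(warp_nn(moving, flow), mask)
    return np.mean(np.abs(pred_k - k_target) ** 2)
```

Because the loss is evaluated against the measured data `k_target` of the target frame, motion estimation is supervised by the quantity that ultimately matters for MCMR, which is the single-optimization idea the abstract describes.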
LAPNet: Non-rigid Registration derived in k-space for Magnetic Resonance Imaging
Küstner, Thomas, Pan, Jiazhen, Qi, Haikun, Cruz, Gastao, Gilliam, Christopher, Blu, Thierry, Yang, Bin, Gatidis, Sergios, Botnar, René, Prieto, Claudia
Physiological motion, such as cardiac and respiratory motion, during Magnetic Resonance (MR) image acquisition can cause image artifacts. Motion correction techniques have been proposed to compensate for these types of motion during thoracic scans, relying on accurate motion estimation from undersampled motion-resolved reconstruction. A particular interest and challenge lie in the derivation of reliable non-rigid motion fields from the undersampled motion-resolved data. Motion estimation is usually formulated in image space via diffusion, parametric-spline, or optical flow methods. However, image-based registration can be impaired by residual aliasing artifacts from the undersampled motion-resolved reconstruction. In this work, we describe a formalism to perform non-rigid registration directly in the sampled Fourier space, i.e. k-space. We propose a deep learning-based approach to perform fast and accurate non-rigid registration from the undersampled k-space data. The basic working principle originates from the Local All-Pass (LAP) technique, a recently introduced optical flow-based registration method. The proposed LAPNet is compared against traditional and deep learning image-based registrations and tested on fully-sampled and highly-accelerated (with two undersampling strategies) 3D respiratory motion-resolved MR images in a cohort of 40 patients with suspected liver or lung metastases and 25 healthy subjects. The proposed LAPNet provided consistent and superior performance to image-based approaches throughout different sampling trajectories and acceleration factors.
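The all-pass relation underlying LAP can be illustrated in 1-D: two signals that differ by a pure shift are linked by a filter with unit magnitude response at every frequency, and the displacement can be read off the filter's phase slope. The NumPy sketch below demonstrates this relation for an integer circular shift; it is a conceptual toy, not the LAP filter-basis fit used by LAPNet.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
a = rng.standard_normal(N)      # reference signal
b = np.roll(a, 3)               # same signal, displaced by 3 samples

# Fourier-domain filter relating the two signals: B(w) = H(w) * A(w)
H = np.fft.fft(b) / np.fft.fft(a)

# for a pure shift, H is all-pass (|H(w)| = 1) with phase -w * shift;
# recover the displacement from the first frequency bin
w1 = 2 * np.pi / N
shift_est = -np.angle(H[1]) / w1
```

LAP generalizes this observation: locally, any smooth deformation looks like a translation, so fitting a small all-pass filter in each neighbourhood, here directly on the sampled k-space data, yields the local displacement.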