Potential and challenges of generative adversarial networks for super-resolution in 4D Flow MRI

Odeback, Oliver Welin, Balasubramanian, Arivazhagan Geetha, Schollenberger, Jonas, Ferdian, Edward, Young, Alistair A., Figueroa, C. Alberto, Schnell, Susanne, Tammisola, Outi, Vinuesa, Ricardo, Granberg, Tobias, Fyrdahl, Alexander, Marlevi, David

arXiv.org Artificial Intelligence 

Time-resolved three-dimensional phase-contrast MRI (4D Flow MRI) enables non-invasive quantification of blood flow and derivation of hemodynamic parameters. However, its clinical application is limited by low spatial resolution and noise, particularly affecting velocity measurements near vessel walls. Machine learning-based super-resolution has shown promise in addressing these limitations, but challenges remain, not least in recovering near-wall velocities. Generative adversarial networks (GANs) offer a compelling solution, having demonstrated strong capabilities in restoring sharp boundaries in non-medical super-resolution settings. Yet, their application in 4D Flow MRI remains unexplored, with implementation challenged by known issues such as training instability and non-convergence. In this study, we investigate GAN-based super-resolution and denoising in 4D Flow MRI. Training and validation were conducted using patient-specific cerebrovascular in-silico models, converted into synthetic images via an MR-true reconstruction pipeline. A dedicated GAN architecture was implemented and evaluated across three adversarial loss functions: Vanilla, Relativistic, and Wasserstein. Our results demonstrate that the proposed GAN improved near-wall velocity recovery compared to a non-adversarial reference (vNRMSE: 6.9% vs. 9.6%); however, implementation specifics proved critical for stable network training. While Vanilla and Relativistic GANs proved unstable compared to generator-only training (vNRMSE: 8.1% and 7.8% vs. 7.2%), a Wasserstein GAN demonstrated optimal stability and incremental improvement (vNRMSE: 6.9% vs. 7.2%).

Introduction

Single image super-resolution (SISR) is the task of reconstructing a high-resolution (HR) image from its low-resolution (LR) counterpart, recovering features blurred or not fully conveyed in the input LR data.
While traditional deterministic methods have been extensively studied [4, 5, 6], deep learning approaches - particularly those based on convolutional neural networks (CNNs) - have become the dominant strategy owing to their ability to learn complex spatial mappings directly from data [6, 7, 8, 9, 10, 11, 12]. In recent years, both network architectures and training strategies have evolved significantly in this domain [8, 9, 12, 13, 14, 15], consistently improving super-resolution performance.
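To make the distinction between the three adversarial loss functions compared above concrete, the following is a minimal sketch of the corresponding discriminator (critic) losses. It is illustrative only and not the authors' implementation: the function names are hypothetical, the inputs `d_real` and `d_fake` are assumed to be raw (pre-sigmoid) discriminator scores for batches of real HR and generated images, and framework details such as the gradient penalty or weight clipping used to enforce the Wasserstein critic's Lipschitz constraint are omitted.

```python
import numpy as np

def sigmoid(x):
    # Logistic function mapping raw scores to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def vanilla_d_loss(d_real, d_fake):
    # Standard (non-saturating) GAN: binary cross-entropy,
    # pushing real scores toward 1 and fake scores toward 0.
    return -np.mean(np.log(sigmoid(d_real)) + np.log(1.0 - sigmoid(d_fake)))

def relativistic_avg_d_loss(d_real, d_fake):
    # Relativistic average GAN: the discriminator estimates whether
    # a real image is more realistic than the average fake, and vice versa.
    return -np.mean(np.log(sigmoid(d_real - d_fake.mean()))
                    + np.log(1.0 - sigmoid(d_fake - d_real.mean())))

def wasserstein_d_loss(d_real, d_fake):
    # Wasserstein critic: maximizes E[D(real)] - E[D(fake)], so the
    # loss is its negative; no sigmoid, scores are unbounded.
    return np.mean(d_fake) - np.mean(d_real)
```

In practice the Wasserstein formulation removes the saturating log-sigmoid terms, which is one reason it tends to yield the more stable training dynamics reported in the abstract.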