USB: Unified Synthetic Brain Framework for Bidirectional Pathology-Healthy Generation and Editing

Wang, Jun, Liu, Peirong

arXiv.org Artificial Intelligence

Understanding the relationship between pathological and healthy brain structures is fundamental to neuroimaging, connecting disease diagnosis and detection with modeling, prediction, and treatment planning. However, paired pathological-healthy data are extremely difficult to obtain, as they rely on pre- and post-treatment imaging, constrained by clinical outcomes and longitudinal data availability. Consequently, most existing brain image generation and editing methods focus on visual quality yet remain domain-specific, treating pathological and healthy image modeling independently. We introduce USB (Unified Synthetic Brain), the first end-to-end framework that unifies bidirectional generation and editing of pathological and healthy brain images. USB models the joint distribution of lesions and brain anatomy through a paired diffusion mechanism and achieves both pathological and healthy image generation. A consistency guidance algorithm further preserves anatomical consistency and lesion correspondence during bidirectional pathology-healthy editing. Extensive experiments on six public brain MRI datasets, covering healthy controls and stroke and Alzheimer's patients, demonstrate USB's ability to produce diverse and realistic results. By establishing the first unified benchmark for brain image generation and editing, USB opens opportunities for scalable dataset creation and robust neuroimaging analysis. Code is available at https://github.com/jhuldr/USB.
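The paired diffusion idea can be sketched with a toy forward-diffusion step in which the pathological and healthy scans are stacked as channels and corrupted with a shared noise draw, so a single denoiser sees them jointly. This is a minimal numpy illustration under assumed DDPM-style noising, not the authors' USB implementation; all shapes, the schedule, and the dummy predictor are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "paired" sample: a pathological scan and its healthy counterpart,
# stacked as channels so one network models their joint distribution.
pathological = rng.standard_normal(16)
healthy = rng.standard_normal(16)
pair = np.stack([pathological, healthy])          # shape (2, 16)

# Standard DDPM-style noise schedule.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    # Forward diffusion with a SHARED timestep and noise draw, so the
    # lesion and anatomy channels are corrupted jointly.
    ab = alpha_bars[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

t = 50
eps = rng.standard_normal(pair.shape)
x_t = q_sample(pair, t, eps)

# A real denoiser would predict eps from (x_t, t); a dummy stands in here.
eps_pred = np.zeros_like(eps)
loss = float(np.mean((eps_pred - eps) ** 2))      # epsilon-prediction MSE
```

Because both channels share one timestep and one loss, the model cannot learn lesion and anatomy independently, which is the point of pairing.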


Deformation-aware Temporal Generation for Early Prediction of Alzheimer's Disease

Honga, Xin, Lin, Jie, Wang, Minghui

arXiv.org Artificial Intelligence

Alzheimer's disease (AD), a degenerative brain condition, can benefit from early prediction to slow its progression. As the disease progresses, patients typically undergo brain atrophy. Current prediction methods for Alzheimer's disease largely involve analyzing morphological changes in brain images through manual feature extraction. This paper proposes a novel method, the Deformation-Aware Temporal Generative Network (DATGN), to automate the learning of morphological changes in brain images associated with disease progression for early prediction. Given the common occurrence of missing data in temporal sequences of MRI images, DATGN initially interpolates incomplete sequences. Subsequently, a bidirectional temporal deformation-aware module guides the network in generating future MRI images that adhere to the disease's progression, facilitating early prediction of Alzheimer's disease. DATGN was tested on the generation of temporal sequences of future MRI images using the ADNI dataset, and the experimental results are competitive in terms of PSNR and MMSE image quality metrics. Furthermore, when DATGN-generated synthetic data were integrated into SVM-, CNN-, and 3D-CNN-based classification methods, accuracy improved by 6.21% to 16% for AD vs. NC classification and by 7.34% to 21.25% for AD vs. MCI vs. NC classification. The qualitative visualization results indicate that DATGN produces MRI images consistent with the brain atrophy trend in Alzheimer's disease, enabling early disease prediction.




Cycle Diffusion Model for Counterfactual Image Generation

Huang, Fangrui, Wang, Alan, Li, Binxu, Trang, Bailey, Yesiloglu, Ridvan, Hua, Tianyu, Peng, Wei, Adeli, Ehsan

arXiv.org Artificial Intelligence

Deep generative models have demonstrated remarkable success in medical image synthesis. However, ensuring conditioning faithfulness and high-quality synthetic images for direct or counterfactual generation remains a challenge. In this work, we introduce a cycle training framework to fine-tune diffusion models for improved conditioning adherence and enhanced synthetic image realism. Our approach, Cycle Diffusion Model (CDM), enforces consistency between generated and original images by incorporating cycle constraints, enabling more reliable direct and counterfactual generation. Experiments on a combined 3D brain MRI dataset (from ABCD, HCP aging & young adults, ADNI, and PPMI) show that our method improves conditioning accuracy and enhances image quality as measured by FID and SSIM. The results suggest that the cycle strategy used in CDM can be an effective method for refining diffusion-based medical image generation, with applications in data augmentation, counterfactual generation, and disease progression modeling.
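The cycle constraint at the heart of CDM can be illustrated in miniature: edit an image toward a new condition, edit it back, and penalize the discrepancy with the original. The sketch below is a hedged numpy toy; the `edit` function is a hypothetical stand-in for a conditional diffusion sampling pass, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def edit(x, shift):
    # Hypothetical stand-in for a conditional generator: a condition-
    # dependent shift. In CDM this would be a diffusion sampling pass
    # conditioned on the target attribute.
    return x + shift

x = rng.standard_normal(8)            # original image (flattened)
c_delta = 0.7                         # change in the conditioning variable

x_cf = edit(x, c_delta)               # counterfactual under the new condition
x_rec = edit(x_cf, -c_delta)          # mapped back to the original condition

cycle_loss = float(np.mean((x_rec - x) ** 2))   # consistency penalty
```

During fine-tuning, minimizing this cycle loss alongside the usual diffusion objective is what ties the counterfactual back to its source image.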


Low-Field Magnetic Resonance Image Quality Enhancement using a Conditional Flow Matching Model

Nguyen, Huu Tien, Eldaly, Ahmed Karam

arXiv.org Artificial Intelligence

This paper introduces a novel framework for image quality transfer based on conditional flow matching (CFM). Unlike conventional generative models that rely on iterative sampling or adversarial objectives, CFM learns a continuous flow between a noise distribution and target data distributions through the direct regression of an optimal velocity field. We evaluate this approach in the context of low-field magnetic resonance imaging (LF-MRI), a rapidly emerging modality that offers affordable and portable scanning but suffers from inherently low signal-to-noise ratio and reduced diagnostic quality. Our framework is designed to reconstruct high-field-like MR images from their corresponding low-field inputs, thereby bridging the quality gap without requiring expensive infrastructure. Experiments demonstrate that CFM not only achieves state-of-the-art performance, but also generalizes robustly to both in-distribution and out-of-distribution data. Importantly, it does so while utilizing significantly fewer parameters than competing deep learning methods. These results underline the potential of CFM as a powerful and scalable tool for MRI reconstruction, particularly in resource-limited clinical environments.
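The flow-matching regression the abstract refers to has a compact standard form: sample a time t, interpolate linearly between a noise sample x0 and a data sample x1, and regress a network onto the constant path velocity x1 - x0. The numpy sketch below shows that target under these standard assumptions; the low-field conditioning input and the network itself are elided, with a dummy prediction standing in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy endpoint samples: x0 from the noise distribution, x1 from the data
# distribution (a fixed vector standing in for a high-field-like image).
x0 = rng.standard_normal(32)
x1 = np.linspace(-1.0, 1.0, 32)

t = rng.uniform()                     # random time in [0, 1]
x_t = (1.0 - t) * x0 + t * x1         # point on the straight-line path
v_target = x1 - x0                    # constant velocity of that path

# A trained v_theta(x_t, t, low_field_input) would be regressed onto
# v_target; a dummy prediction stands in here.
v_pred = np.zeros_like(v_target)
fm_loss = float(np.mean((v_pred - v_target) ** 2))
```

At inference, integrating the learned velocity field from t = 0 to t = 1 carries a noise sample to a high-field-like reconstruction, with no iterative denoising schedule or adversarial objective.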



IdenBAT: Disentangled Representation Learning for Identity-Preserved Brain Age Transformation

Maeng, Junyeong, Oh, Kwanseok, Jung, Wonsik, Suk, Heung-Il

arXiv.org Artificial Intelligence

Brain aging is an intrinsic biological phenomenon marked by discernible morphological changes within the human brain Fjell and Walhovd (2010). In the analysis of brain aging using medical imaging, structural magnetic resonance imaging (sMRI) plays a crucial role, as it provides detailed insights into age-related variations and supports accurate assessment of these alterations. Advances in sMRI-based age transformation have allowed researchers and clinicians to visualize and quantify patient-specific patterns of brain maturation and degeneration, facilitating advances in medical diagnosis. These capabilities can be pivotal for longitudinal studies that track cognitive or health-state progression over time Cole et al. (2018); Huizinga et al. (2018). Brain age transformation that preserves patient traits, however, remains a formidable challenge: because most methods also alter characteristics unrelated to aging during the transformation, the crux lies in modeling the aging process without distorting the personal identity intrinsic to each subject Xia et al. (2021). When an aging model fails to preserve identity-related properties, it can lead to misinterpretation of age-related changes, potentially compromising the accuracy and reliability of diagnostic decisions. Previous brain age transformation studies Huizinga et al. (2018); Zhang et al. (2016); Zhao et al. (2019); Lorenzi et al. (2015); Sivera et al. (2019) have often relied on prototype-based strategies that compare averaged brain patterns across different age groups.
While these approaches aid in understanding generalized characteristics shared among age groups, they tend to neglect the unique traits of individual subjects. Recently, with the emergence of generative models Goodfellow et al. (2014); Makhzani et al. (2015) trained on longitudinal data, researchers have gained the ability to create more accurate and realistic simulations of brain aging, by virtue of data comprising MRI scans of the same subject at multiple time points Rachmadi, del C. Valdés-Hernández, Makin,


Learning Neural Representations of Human Cognition across Many fMRI Studies

Mensch, Arthur (Inria)

Neural Information Processing Systems

Cognitive neuroscience is enjoying a rapid increase in extensive public brain-imaging datasets. This opens the door to large-scale statistical models. Finding a unified perspective for all available data calls for scalable and automated solutions to an old challenge: how to aggregate heterogeneous information on brain function into a universal cognitive system that relates mental operations/cognitive processes/psychological tasks to brain networks? We cast this challenge as a machine-learning problem: predicting conditions from statistical brain maps across different studies. For this, we leverage multi-task learning and multi-scale dimension reduction to learn low-dimensional representations of brain images that carry cognitive information and can be robustly associated with psychological stimuli.
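The multi-task, multi-scale idea described above can be caricatured as one shared linear reduction of the voxel-level brain map followed by a per-study softmax head. This is a schematic numpy sketch with made-up dimensions and random weights, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

n_voxels, k = 1000, 16                # brain-map size, shared embedding dim
W_shared = 0.01 * rng.standard_normal((n_voxels, k))   # shared reduction

# One classification head per study, each with its own condition labels.
heads = {"study_a": 0.01 * rng.standard_normal((k, 5)),   # 5 conditions
         "study_b": 0.01 * rng.standard_normal((k, 3))}   # 3 conditions

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(brain_map, study):
    # Shared low-dimensional embedding, then a study-specific head.
    z = brain_map @ W_shared
    return softmax(z @ heads[study])

p = predict(rng.standard_normal(n_voxels), "study_a")
```

Training all heads jointly forces the shared projection to carry cognitive information common across studies, which is what makes the low-dimensional representation transferable.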


Geometric Transformation Uncertainty for Improving 3D Fetal Brain Pose Prediction from Freehand 2D Ultrasound Videos

Ramesh, Jayroop, Dinsdale, Nicola K, Consortium, the INTERGROWTH-21st, Yeung, Pak-Hei, Namburete, Ana IL

arXiv.org Artificial Intelligence

Accurately localizing two-dimensional (2D) ultrasound (US) fetal brain images in the 3D brain, using minimal computational resources, is an important task for automated US analysis of fetal growth and development. We propose an uncertainty-aware deep learning model for automated 3D plane localization in 2D fetal brain images. Specifically, a multi-head network is trained to jointly regress 3D plane pose from 2D images in terms of different geometric transformations. The model explicitly learns to predict uncertainty, allocating higher weight to inputs with low variance across the different transformations to improve performance. Our proposed method, QAERTS, demonstrates pose estimation accuracy superior to the state-of-the-art and to most uncertainty-based approaches, yielding a 9% improvement in plane angle (PA) for localization accuracy and an 8% improvement in normalized cross-correlation (NCC) for sampled image quality. QAERTS is also efficient, containing 5× fewer parameters than an ensemble-based approach, making it advantageous in resource-constrained settings. In addition, QAERTS proves more robust to the noise effects observed in freehand US scanning by leveraging rotational discontinuities and explicit output uncertainties.
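The uncertainty weighting QAERTS describes, where heads with low predicted variance receive more weight, corresponds to standard inverse-variance fusion. Below is a hedged toy version in numpy, with fabricated per-head predictions and variances standing in for the model's outputs:

```python
import numpy as np

# Fabricated per-head outputs: pose predictions (e.g. a plane angle in
# degrees) from heads trained on different geometric parameterisations,
# plus each head's predicted variance.
preds = np.array([30.0, 32.0, 31.0, 45.0])
vars_ = np.array([1.0, 2.0, 1.5, 50.0])    # higher variance = less trusted

# Inverse-variance (precision) weighting: uncertain heads contribute less,
# which lets the fused estimate discount the outlier head.
w = (1.0 / vars_) / np.sum(1.0 / vars_)
fused = float(np.sum(w * preds))
```

The high-variance fourth head barely moves the fused estimate, illustrating how learned uncertainty confers robustness to noisy freehand frames.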