
Collaborating Authors: Suk, Heung-Il


IdenBAT: Disentangled Representation Learning for Identity-Preserved Brain Age Transformation

arXiv.org Artificial Intelligence

Brain aging is an intrinsic biological phenomenon marked by discernible morphological changes within the human brain Fjell and Walhovd (2010). In the analysis of brain aging using medical imaging, structural magnetic resonance imaging (sMRI) plays a crucial role, as it provides detailed insights into age-related variations and assists in accurate assessment of these alterations. Advances in sMRI-based age transformation have allowed researchers and clinicians to visualize and quantify patient-specific patterns of brain maturation and degeneration, facilitating advances in medical diagnosis. These capabilities can be pivotal for longitudinal studies that track cognitive or health-state progression over time Cole, Ritchie, Bastin, Hernández, Muñoz Maniega, Royle, Corley, Pattie, Harris, Zhang et al. (2018); Huizinga, Poot, Vernooij, Roshchupkin, Bron, Ikram, Rueckert, Niessen, Klein, Initiative et al. (2018), yet brain age transformation that preserves patient-specific traits remains a formidable challenge. Because most methods also alter characteristics unrelated to aging during the transformation, the crux lies in modeling the aging process without distorting the personal identity intrinsic to each subject Xia, Chartsias, Wang, Tsaftaris, Initiative et al. (2021). When an aging model fails to preserve identity-related personal properties, it may lead to misinterpretation of age-related changes, potentially compromising the accuracy and reliability of diagnostic decisions. Previous brain age transformation studies Huizinga et al. (2018); Zhang, Shi, Wu, Wang, Yap and Shen (2016); Zhao, Adeli, Honnorat, Leng and Pohl (2019); Lorenzi, Pennec, Frisoni, Ayache, Initiative et al. (2015); Sivera, Delingette, Lorenzi, Pennec, Ayache, Initiative et al. (2019) have often relied on prototype-based strategies that compare averaged brain patterns across different age groups. While these approaches aid in understanding generalized characteristics shared among age groups, they tend to neglect the unique traits of individual subjects. Recently, with the emergence of generative models using longitudinal data Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville and Bengio (2014); Makhzani, Shlens, Jaitly, Goodfellow and Frey (2015), researchers have gained the ability to create more accurate and realistic simulations of brain aging by virtue of such data, which comprise MRI scans of the same subject at multiple time points Rachmadi, del C. Valdés-Hernández, Makin,


FIESTA: Fourier-Based Semantic Augmentation with Uncertainty Guidance for Enhanced Domain Generalizability in Medical Image Segmentation

arXiv.org Artificial Intelligence

Single-source domain generalization (SDG) in medical image segmentation (MIS) aims to generalize a model using data from only one source domain so that it can segment data from an unseen target domain. Despite substantial advances in SDG with data augmentation, existing methods often fail to fully consider the fine details and uncertain regions prevalent in MIS, leading to mis-segmentation. This paper proposes a Fourier-based semantic augmentation method called FIESTA that uses uncertainty guidance to advance the fundamental goals of MIS in an SDG context by manipulating the amplitude and phase components in the frequency domain. The proposed Fourier augmentative transformer performs semantic amplitude modulation based on meaningful angular points to induce pertinent variations and harnesses the phase spectrum to ensure structural coherence. Moreover, FIESTA employs epistemic uncertainty to fine-tune the augmentation process, improving the model's ability to adapt to diverse augmented data and concentrate on areas of higher ambiguity. Extensive experiments across three cross-domain scenarios demonstrate that FIESTA surpasses recent state-of-the-art SDG approaches in segmentation performance and significantly boosts the applicability of the model across medical imaging modalities.
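
The abstract does not include code, but the core frequency-domain idea can be illustrated with a minimal sketch: perturb the amplitude spectrum of an image (which mostly carries appearance and contrast) while leaving its phase spectrum (which carries structure) intact. The reference-mixing scheme, the `alpha` mixing ratio, and the low-frequency band size below are illustrative assumptions, not the authors' Fourier augmentative transformer.

```python
import numpy as np

def fourier_amplitude_augment(img, ref, alpha=0.3, beta=0.1):
    """Perturb the amplitude spectrum of `img` (H, W) using a reference image
    `ref`, while keeping the phase spectrum intact to preserve anatomical
    structure. `alpha` controls how strongly low-frequency amplitudes are
    mixed and `beta` sets the size of the low-frequency band; both values are
    illustrative, not the paper's settings."""
    fft_img = np.fft.fft2(img)
    fft_ref = np.fft.fft2(ref)

    amp_img, pha_img = np.abs(fft_img), np.angle(fft_img)
    amp_ref = np.abs(fft_ref)

    # Shift the zero-frequency component to the center so the low-frequency
    # band forms a contiguous square.
    amp_img = np.fft.fftshift(amp_img)
    amp_ref = np.fft.fftshift(amp_ref)

    h, w = img.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2

    # Mix only the low-frequency amplitudes (global appearance / contrast),
    # leaving high-frequency amplitudes and all phases untouched.
    low = (slice(ch - bh, ch + bh), slice(cw - bw, cw + bw))
    amp_img[low] = (1 - alpha) * amp_img[low] + alpha * amp_ref[low]

    amp_img = np.fft.ifftshift(amp_img)
    augmented = np.fft.ifft2(amp_img * np.exp(1j * pha_img))
    return np.real(augmented)
```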


Transferring Ultrahigh-Field Representations for Intensity-Guided Brain Segmentation of Low-Field Magnetic Resonance Imaging

arXiv.org Artificial Intelligence

Ultrahigh-field (UHF) magnetic resonance imaging (MRI), i.e., 7T MRI, provides superior anatomical detail of internal brain structures owing to its enhanced signal-to-noise ratio and susceptibility-induced contrast. However, the widespread use of 7T MRI is limited by its high cost and lower accessibility compared to low-field (LF) MRI. This study proposes a deep-learning framework that systematically fuses the input LF magnetic resonance feature representations with inferred 7T-like feature representations for brain image segmentation in a 7T-absent environment. Specifically, our adaptive fusion module aggregates 7T-like features derived from the LF image by a pre-trained network and then refines them so that the UHF guidance can be effectively assimilated into the LF image features. Using the intensity-guided features obtained from this aggregation and assimilation, segmentation models can recognize subtle structural representations that are usually difficult to discern from LF features alone. Beyond these advantages, the strategy can be applied seamlessly to arbitrary segmentation models by modulating the contrast of LF features in alignment with the UHF guidance. Exhaustive experiments demonstrated that the proposed method significantly outperformed all baseline models on both brain tissue and whole-brain segmentation tasks; furthermore, it exhibited remarkable adaptability and scalability by successfully integrating diverse segmentation models and tasks. These improvements were not only quantifiable but also visible in the superior visual quality of the segmentation masks.
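
The adaptive fusion module is described only at a high level here; the sketch below shows one plausible realization in PyTorch, where a learned gate decides, per location, how much 7T-like guidance to inject into the LF features. The 3D convolutional gating design and layer sizes are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse low-field (LF) features with 7T-like features inferred by a
    pre-trained network. A channel-wise gate decides, per voxel, how much
    UHF guidance to inject into the LF representation (illustrative design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, lf_feat: torch.Tensor, uhf_feat: torch.Tensor) -> torch.Tensor:
        # lf_feat, uhf_feat: (B, C, D, H, W) feature maps on the same grid.
        g = self.gate(torch.cat([lf_feat, uhf_feat], dim=1))
        fused = lf_feat + g * uhf_feat  # inject guidance where the gate is open
        return self.refine(fused)

# Usage sketch: features from any segmentation encoder can be modulated.
# lf, uhf = encoder(lf_image), frozen_7t_network(lf_image)
# out = AdaptiveFusion(channels=64)(lf, uhf)
```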


KindMed: Knowledge-Induced Medicine Prescribing Network for Medication Recommendation

arXiv.org Artificial Intelligence

The extensive adoption of electronic health records (EHRs) offers opportunities for their use in various clinical analyses. More comprehensive insights can be acquired by enriching an EHR cohort with external knowledge (e.g., standardized medical ontologies and rich semantics curated on the web), as it reveals a spectrum of informative relations between observed medical codes. This paper proposes a novel Knowledge-Induced Medicine Prescribing Network (KindMed) framework that recommends medicines by inducing knowledge from myriad medical-related external sources upon the EHR cohort, rendering them as medical knowledge graphs (KGs). On top of relation-aware graph representation learning to unravel adequate embeddings of such KGs, we leverage hierarchical sequence learning to discover and fuse clinical and medicine temporal dynamics across patients' historical admissions, encouraging personalized recommendations. To predict safe, precise, and personalized medicines, we devise an attentive prescribing module that accounts for and associates three essential aspects: a summary of joint historical medical records, clinical condition progression, and the current clinical state of the patient. We demonstrate the effectiveness of KindMed on augmented real-world EHR cohorts, achieving leading performance against graph-driven competing baselines.
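
As a rough illustration of the hierarchical sequence learning and attentive prescribing described above, the sketch below encodes per-admission clinical and medicine embeddings (assumed to come from a relation-aware graph encoder, which is not shown) with two recurrent streams and attends over the medicine history from the current clinical state. Dimensions, the attention form, and the output head are placeholders rather than KindMed's actual design.

```python
import torch
import torch.nn as nn

class HierarchicalPrescriber(nn.Module):
    """Sketch of hierarchical sequence learning over a patient's admissions.
    Each admission is represented by a clinical and a medicine KG embedding,
    assumed to be produced by a relation-aware graph encoder (not shown)."""

    def __init__(self, dim: int, num_medicines: int):
        super().__init__()
        # dim must be divisible by the attention head count (4 here).
        self.clinical_rnn = nn.GRU(dim, dim, batch_first=True)
        self.medicine_rnn = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(2 * dim, num_medicines)

    def forward(self, clin_emb, med_emb):
        # clin_emb, med_emb: (B, T, dim) embeddings of T past admissions.
        clin_states, _ = self.clinical_rnn(clin_emb)
        med_states, _ = self.medicine_rnn(med_emb)

        # Attend from the current clinical state over the history of medicine
        # states to summarize relevant past prescriptions.
        query = clin_states[:, -1:, :]
        history, _ = self.attn(query, med_states, med_states)

        joint = torch.cat([query, history], dim=-1).squeeze(1)
        return torch.sigmoid(self.out(joint))  # multi-label medicine scores
```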


A Learnable Counter-condition Analysis Framework for Functional Connectivity-based Neurological Disorder Diagnosis

arXiv.org Artificial Intelligence

To understand the biological characteristics of neurological disorders with functional connectivity (FC), recent studies have widely utilized deep learning-based models to identify diseases and have conducted post-hoc analyses via explainable models to discover disease-related biomarkers. Most existing frameworks consist of three stages, namely feature selection, feature extraction for classification, and analysis, where each stage is implemented separately. However, if the results at each stage lack reliability, this can cause misdiagnosis and incorrect analysis in subsequent stages. In this study, we propose a novel unified framework that systematically integrates diagnosis (i.e., feature selection and feature extraction) and explanation. Notably, we devised an adaptive attention network as a feature selection approach to identify individual-specific disease-related connections. We also propose a functional network relational encoder that summarizes the global topological properties of FC by learning inter-network relations without pre-defined edges between functional networks. Last but not least, our framework provides a novel explanatory power for neuroscientific interpretation, termed counter-condition analysis: we simulate FC that reverses the diagnostic information (i.e., counter-condition FC), converting a normal brain to an abnormal one and vice versa. We validated the effectiveness of our framework using two large resting-state functional magnetic resonance imaging (fMRI) datasets, Autism Brain Imaging Data Exchange (ABIDE) and REST-meta-MDD, and demonstrated that our framework outperforms other competing methods for disease identification. Furthermore, we analyzed disease-related neurological patterns based on the counter-condition analysis.
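
Counter-condition analysis is part of the unified framework above; as a generic illustration of the idea only, the sketch below perturbs an FC matrix by gradient steps until a trained classifier predicts the opposite label, so that the difference map highlights connections carrying diagnostic information. The gradient-based optimizer, step size, and symmetry projection are illustrative choices, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def counter_condition_fc(model, fc, target_label, steps=100, lr=0.01):
    """Perturb a functional-connectivity matrix `fc` (1, R, R) so that a
    trained classifier predicts `target_label` (e.g., flip patient -> control).
    Generic gradient-based counterfactual sketch, not the paper's procedure."""
    fc_cc = fc.clone().detach()
    target = torch.tensor([target_label])
    for _ in range(steps):
        fc_cc = fc_cc.detach().requires_grad_(True)
        loss = F.cross_entropy(model(fc_cc), target)
        grad, = torch.autograd.grad(loss, fc_cc)
        with torch.no_grad():
            fc_cc = fc_cc - lr * grad                        # step toward the target class
            fc_cc = 0.5 * (fc_cc + fc_cc.transpose(-1, -2))  # keep the FC matrix symmetric
    # The difference highlights connections carrying diagnostic information.
    return fc_cc.detach(), (fc_cc - fc).detach()
```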


Deep Geometric Learning with Monotonicity Constraints for Alzheimer's Disease Progression

arXiv.org Artificial Intelligence

Alzheimer's disease (AD) is a devastating neurodegenerative condition that leads to progressive and irreversible dementia; thus, predicting its progression over time is vital for clinical diagnosis and treatment. Numerous studies have used structural magnetic resonance imaging (MRI) to model AD progression, focusing on three integral aspects: (i) temporal variability, (ii) incomplete observations, and (iii) temporal geometric characteristics. However, deep learning-based approaches that address data variability and sparsity have yet to sufficiently consider the inherent geometric properties. The ordinary differential equation-based geometric modeling method (ODE-RGRU) has recently emerged as a promising strategy for modeling time-series data by intertwining a recurrent neural network and an ODE in Riemannian space. Despite its achievements, ODE-RGRU encounters limitations when extrapolating symmetric positive definite matrices from incomplete samples, leading to feature reversals that are particularly problematic in the clinical setting. Therefore, this study proposes a novel geometric learning approach that models longitudinal MRI biomarkers and cognitive scores by combining three modules: topological space shift, ODE-RGRU, and trajectory estimation. We have also developed a training algorithm that integrates manifold mapping with monotonicity constraints to reflect the irreversibility of measurement transitions. We verify the efficacy of our proposed method by predicting clinical labels and cognitive scores over time in regular and irregular settings, and we thoroughly analyze the proposed framework through an ablation study.
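
The monotonicity constraint can be conveyed with a small sketch: a hinge-style penalty on consecutive predictions that discourages reversals in longitudinal measurements. This is one common way to encode such a constraint; the paper integrates it with manifold mapping and the ODE-RGRU modules, which are not reproduced here, and the loss weight below is a placeholder.

```python
import torch

def monotonicity_penalty(pred_scores: torch.Tensor, increasing: bool = True) -> torch.Tensor:
    """Hinge-style penalty that discourages reversals in predicted longitudinal
    measurements. `pred_scores` has shape (B, T), ordered by visit time.
    Whether a measurement should be non-decreasing (e.g., a dementia severity
    score) or non-increasing (e.g., hippocampal volume) is set per biomarker."""
    diffs = pred_scores[:, 1:] - pred_scores[:, :-1]
    if not increasing:
        diffs = -diffs
    # Penalize only violations, i.e., steps in the "wrong" direction.
    return torch.relu(-diffs).mean()

# Example of combining with a task loss (lambda_mono is illustrative):
# total_loss = prediction_loss + 0.1 * monotonicity_penalty(preds, increasing=True)
```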


A Quantitatively Interpretable Model for Alzheimer's Disease Prediction Using Deep Counterfactuals

arXiv.org Artificial Intelligence

Deep learning (DL) for predicting Alzheimer's disease (AD) has enabled timely intervention in disease progression, yet it still demands careful interpretability to explain how DL models reach their decisions. Recently, counterfactual reasoning has gained increasing attention in medical research because of its ability to provide refined visual explanatory maps. However, such explanatory maps are insufficient on visual inspection alone unless their medical or neuroscientific validity is demonstrated via quantitative features. In this study, we synthesize counterfactual-labeled structural MRIs using our proposed framework and transform them into gray matter density maps to measure volumetric changes over parcellated regions of interest (ROIs). We also devised a lightweight linear classifier to boost the effectiveness of the constructed ROIs, promote quantitative interpretation, and achieve predictive performance comparable to DL methods. Through this process, our framework produces an "AD-relatedness index" for each ROI and offers an intuitive understanding of brain status for an individual patient and across patient groups with respect to AD progression.
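
As a loose stand-in for the quantitative pipeline described above, the sketch below averages the gray-matter density change between a real map and its counterfactual within each atlas ROI and feeds the resulting features to a lightweight linear classifier. The atlas handling, feature definition, and the relatedness proxy at the end are assumptions; the paper's AD-relatedness index is defined within its own framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def roi_difference_features(gm_real, gm_cf, roi_labels, n_rois):
    """Average gray-matter density change per ROI between a subject's real map
    and its counterfactual map (both voxel arrays on the same atlas grid).
    Assumes atlas label 0 denotes background and ROIs are labeled 1..n_rois."""
    delta = gm_cf - gm_real
    return np.array([delta[roi_labels == r].mean() for r in range(1, n_rois + 1)])

# X: (N, n_rois) ROI-wise counterfactual changes; y: (N,) diagnostic labels.
# A lightweight linear classifier keeps the model quantitatively interpretable;
# its coefficients give a rough per-ROI relatedness proxy (not the paper's index).
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# roi_relatedness_proxy = clf.coef_.ravel() * X.std(axis=0)
```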


Deep Efficient Continuous Manifold Learning for Time Series Modeling

arXiv.org Artificial Intelligence

Modeling non-Euclidean data is drawing extensive attention alongside the unprecedented success of deep neural networks in diverse fields. In particular, symmetric positive definite (SPD) matrices are actively studied in computer vision, signal processing, and medical image analysis, owing to their ability to capture beneficial statistical representations. However, because of their rigid constraints, optimization remains challenging and computational costs are high, especially when incorporating them into a deep learning framework. In this paper, we propose a framework that exploits a diffeomorphism between a Riemannian manifold and a Cholesky space, by which it becomes feasible not only to solve optimization problems efficiently but also to greatly reduce computation costs. Furthermore, for dynamic modeling of time-series data, we devise a continuous manifold learning method by systematically integrating a manifold ordinary differential equation and a gated recurrent neural network. It is worth noting that, owing to the convenient parameterization of matrices in a Cholesky space, training our proposed network with Riemannian geometric metrics is straightforward. We demonstrate through experiments on regular and irregular time-series datasets that our proposed model can be trained efficiently and reliably and that it outperforms existing manifold methods and state-of-the-art methods in various time-series tasks.
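
A minimal sketch of the SPD-to-Cholesky mapping that underlies this kind of framework is given below: the strictly lower-triangular part of the Cholesky factor is kept as-is and the diagonal is log-transformed, yielding an unconstrained representation with a smooth inverse. The Riemannian metric and the manifold ODE/GRU components of the proposed model are omitted here.

```python
import torch

def spd_to_cholesky(S: torch.Tensor) -> torch.Tensor:
    """Map an SPD matrix (..., n, n) to an unconstrained representation: the
    strictly lower-triangular part of its Cholesky factor plus the log of the
    diagonal. The map is a diffeomorphism, so optimization can proceed in a
    Euclidean parameter space."""
    L = torch.linalg.cholesky(S)
    strict = torch.tril(L, diagonal=-1)
    diag = torch.diag_embed(torch.log(torch.diagonal(L, dim1=-2, dim2=-1)))
    return strict + diag

def cholesky_to_spd(X: torch.Tensor) -> torch.Tensor:
    """Inverse map: exponentiate the diagonal to recover a valid Cholesky
    factor, then rebuild the SPD matrix."""
    strict = torch.tril(X, diagonal=-1)
    diag = torch.diag_embed(torch.exp(torch.diagonal(X, dim1=-2, dim2=-1)))
    L = strict + diag
    return L @ L.transpose(-1, -2)
```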


EAG-RS: A Novel Explainability-guided ROI-Selection Framework for ASD Diagnosis via Inter-regional Relation Learning

arXiv.org Artificial Intelligence

Deep learning models based on resting-state functional magnetic resonance imaging (rs-fMRI) have been widely used to diagnose brain diseases, particularly autism spectrum disorder (ASD). Existing studies have leveraged the functional connectivity (FC) of rs-fMRI and achieved notable classification performance. However, they have significant limitations, including the limited information captured by linear, low-order FC used as model input, the failure to consider individual characteristics (i.e., different symptoms or varying stages of severity) among patients with ASD, and the non-explainability of the decision process. To address these limitations, we propose a novel explainability-guided region of interest (ROI) selection (EAG-RS) framework that identifies non-linear, high-order functional associations among brain regions by leveraging an explainable artificial intelligence technique and selects class-discriminative regions for brain disease identification. The proposed framework comprises three steps: (i) inter-regional relation learning to estimate non-linear relations through random seed-based network masking, (ii) explainable connection-wise relevance score estimation to explore high-order relations between functional connections, and (iii) non-linear, high-order FC-based diagnosis-informative ROI selection and classifier learning to identify ASD. We validated the effectiveness of the proposed method through experiments on the Autism Brain Imaging Data Exchange (ABIDE) dataset, demonstrating that it outperforms other comparative methods in terms of various evaluation metrics. Furthermore, we qualitatively analyzed the selected ROIs and identified ASD subtypes linked to previous neuroscientific studies.
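
To give a concrete flavor of steps (ii) and (iii), the sketch below uses gradient x input as a simple stand-in for the explainability technique, aggregates connection-wise relevance per ROI, and keeps the top-k regions. The relevance method, aggregation rule, and k are illustrative assumptions rather than the EAG-RS procedure.

```python
import torch

def select_rois_by_relevance(model, fc, labels, k=30):
    """Rank ROIs by aggregated connection-wise relevance and keep the top-k.
    Gradient x input is used as a simple stand-in for the explainability
    method; `fc` is (B, R, R) functional connectivity, `labels` is (B,)."""
    fc = fc.clone().requires_grad_(True)
    logits = model(fc)
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, fc)

    relevance = (grad * fc).abs()              # (B, R, R) connection-wise relevance
    roi_relevance = relevance.sum(dim=(0, 2))  # aggregate over subjects and partner ROIs
    topk = torch.topk(roi_relevance, k).indices
    return topk                                # indices of class-discriminative ROIs
```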


SADM: Sequence-Aware Diffusion Model for Longitudinal Medical Image Generation

arXiv.org Artificial Intelligence

Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently, prior knowledge of these factors is beneficial when modeling their future state, i.e., via image generation. However, most medical image generation tasks rely only on the input from a single image, ignoring sequential dependencies even when longitudinal data are available. Sequence-aware deep generative models, where the model input is a sequence of ordered and timestamped images, remain underexplored in the medical imaging domain, which features several unique challenges: 1) sequences of various lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Diffusion models have recently shown promising results in high-fidelity image generation, and our method extends this technique by introducing a sequence-aware transformer as the conditional module of a diffusion model. This design enables learning longitudinal dependencies even with missing data during training and allows autoregressive generation of a sequence of images during inference. Our extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
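
The conditional module can be sketched as follows: a transformer encoder summarizes an ordered, possibly incomplete sequence of prior-scan embeddings (assumed to come from a separate image encoder) into a conditioning vector for the diffusion denoiser, with missing frames handled by a padding mask. Layer sizes, the time embedding, and the pooling step are placeholders, not the SADM architecture.

```python
import torch
import torch.nn as nn

class SequenceConditioner(nn.Module):
    """Sketch of a sequence-aware conditional module: a transformer encoder
    summarizes an ordered (possibly incomplete) sequence of prior scans into a
    conditioning vector for a diffusion denoiser. Frame embeddings are assumed
    to come from a separate image encoder; missing frames are masked out."""

    def __init__(self, dim: int, num_layers: int = 4):
        super().__init__()
        # dim must be divisible by the head count (8 here).
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.time_embed = nn.Linear(1, dim)

    def forward(self, frame_emb, timestamps, missing_mask):
        # frame_emb: (B, T, dim); timestamps: (B, T, 1); missing_mask: (B, T), True = missing.
        tokens = frame_emb + self.time_embed(timestamps)
        encoded = self.encoder(tokens, src_key_padding_mask=missing_mask)
        # Mean-pool over observed frames to obtain a single conditioning vector,
        # which would be fed to the denoising network at every diffusion step.
        valid = (~missing_mask).unsqueeze(-1).float()
        cond = (encoded * valid).sum(dim=1) / valid.sum(dim=1).clamp(min=1.0)
        return cond
```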