Sowmya, Arcot
Multi-omics data integration for early diagnosis of hepatocellular carcinoma (HCC) using machine learning
Spooner, Annette, Moridani, Mohammad Karimi, Safarchi, Azadeh, Maher, Salim, Vafaee, Fatemeh, Zekry, Amany, Sowmya, Arcot
The complementary information found in different modalities of patient data can aid in more accurate modelling of a patient's disease state and a better understanding of the underlying biological processes of a disease. However, the analysis of multi-modal, multi-omics data presents many challenges, including high dimensionality and varying size, statistical distribution, scale and signal strength between modalities. In this work, we compare the performance of a variety of ensemble machine learning algorithms that are capable of late integration of multi-class data from different modalities. The ensemble methods and their variations tested were i) a voting ensemble, with hard and soft votes, ii) a meta-learner, iii) a multi-modal AdaBoost model using a hard vote, a soft vote and a meta-learner to integrate the modalities on each boosting round, iv) the PB-MVBoost model, and v) a novel application of a mixture-of-experts model. These were compared to simple concatenation as a baseline. We examine these methods using data from an in-house study on hepatocellular carcinoma (HCC), along with four validation datasets from studies on breast cancer and inflammatory bowel disease (IBD). Using the area under the receiver operating characteristic curve as a measure of performance, we develop models that achieve values of up to 0.85 and find that two boosted methods, PB-MVBoost and AdaBoost with a soft vote, were the best-performing models overall. We also examine the stability of the features selected and the size of the clinical signature determined. Finally, we provide recommendations for the integration of multi-modal, multi-class data.
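As a minimal sketch of the late-integration idea compared above, the following trains one classifier per omics modality and combines them with a soft vote; the modality names, classifier choice and toy data are illustrative assumptions rather than the study's exact pipeline.

```python
# Minimal sketch of late integration by soft voting across modalities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_per_modality(modalities, y):
    """Train one base learner per omics modality (late integration)."""
    return {name: RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
            for name, X in modalities.items()}

def soft_vote(models, modalities):
    """Average class-probability outputs across modalities; predict by argmax."""
    probs = [models[name].predict_proba(X) for name, X in modalities.items()]
    return np.mean(probs, axis=0)

# Hypothetical example: two modalities measured on the same 100 patients.
rng = np.random.default_rng(0)
modalities = {"transcriptomics": rng.normal(size=(100, 500)),
              "metabolomics": rng.normal(size=(100, 80))}
y = rng.integers(0, 2, size=100)
models = fit_per_modality(modalities, y)
y_prob = soft_vote(models, modalities)
```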
BioFusionNet: Deep Learning-Based Survival Risk Stratification in ER+ Breast Cancer Through Multifeature and Multimodal Data Fusion
Mondol, Raktim Kumar, Millar, Ewan K. A., Sowmya, Arcot, Meijering, Erik
Breast cancer is a significant health concern affecting millions of women worldwide. Accurate survival risk stratification plays a crucial role in guiding personalised treatment decisions and improving patient outcomes. Here we present BioFusionNet, a deep learning framework that fuses image-derived features with genetic and clinical data to achieve a holistic patient profile and perform survival risk stratification of ER+ breast cancer patients. We employ multiple self-supervised feature extractors, namely DINO and MoCoV3, pretrained on histopathology patches to capture detailed histopathological image features. We then utilise a variational autoencoder (VAE) to fuse these features, and harness the latent space of the VAE to feed into a self-attention network, generating patient-level features. Next, we develop a co-dual-cross-attention mechanism to combine the histopathological features with genetic data, enabling the model to capture the interplay between them. Additionally, clinical data is incorporated using a feed-forward network (FFN), further enhancing predictive performance and achieving comprehensive multimodal feature integration. Furthermore, we introduce a weighted Cox loss function, specifically designed to handle imbalanced survival data, which is a common challenge in the field. The proposed model achieves a mean concordance index (C-index) of 0.77 and a time-dependent area under the curve (AUC) of 0.84, outperforming state-of-the-art methods. It predicts risk (high versus low) with prognostic significance for overall survival (OS) in univariate analysis (HR=2.99, 95% CI: 1.88--4.78, p<0.005), and maintains independent significance in multivariate analysis incorporating standard clinicopathological variables (HR=2.91, 95% CI: 1.80--4.68, p<0.005). The proposed method not only improves model performance but also addresses a critical gap in handling imbalanced data.
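Of the components above, the weighted Cox loss is the most readily illustrated in code. Below is a hedged sketch of a Cox partial-likelihood loss with per-sample event weighting in PyTorch; the exact weighting scheme used in BioFusionNet may differ.

```python
# A sketch of a weighted Cox partial-likelihood loss (Breslow-style),
# in the spirit of the weighted Cox loss described above.
import torch

def weighted_cox_loss(risk, time, event, weight):
    """Negative weighted Cox partial log-likelihood.

    risk   : (N,) predicted log-risk scores
    time   : (N,) follow-up times
    event  : (N,) 1 if the event was observed, 0 if censored
    weight : (N,) per-sample weights (e.g. up-weighting rare events)
    """
    order = torch.argsort(time, descending=True)   # sort so each risk set is a prefix
    risk, event, weight = risk[order], event[order].float(), weight[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log of the sum over the risk set
    per_event = weight * event * (risk - log_cumsum)
    return -per_event.sum() / event.sum().clamp(min=1.0)
```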
Automatic 3D Multi-modal Ultrasound Segmentation of Human Placenta using Fusion Strategies and Deep Learning
Singh, Sonit, Stevenson, Gordon, Mein, Brendan, Welsh, Alec, Sowmya, Arcot
The placenta has roles in fetal growth and development, oxygenation and nutrition, synthesising vital substances for pregnancy maintenance, including estrogen, progesterone, cytokines, and growth factors, and acting as a barrier against pathogens and drugs. Placental dysfunction is a leading cause of perinatal morbidity and mortality, including fetal growth restriction (FGR), pre-eclampsia, and stillbirth [1]. The in vivo assessment of the placenta across gestation is critical to understanding placental structure, function, and development and to identifying strategies to optimise pregnancy outcome [2]. The primary modality for placental evaluation is two-dimensional (2D) ultrasound (US), which is non-invasive, inexpensive and more easily acceptable and accessible than other imaging modalities such as X-ray or Magnetic Resonance Imaging (MRI). It may be used to characterise the location, shape, and volume of the placenta along with its interface with the endometrium and myometrium. Three-dimensional Power Doppler (PD) ultrasound permits direct visualisation of multi-directional placental vascularity, allowing assessment of both the uteroplacental and fetoplacental circulations and providing dynamic assessment of blood flow for imaging of placental abnormalities. In three-dimensional (3D) ultrasound, semantic segmentation can be used to separate the placenta for qualitative and quantitative analysis. Placenta segmentation is challenging because of its geometry, position, and appearance: the shape and location of the placenta vary greatly across subjects [3], and fetal position can lead to shadowing artefacts. Determination of the placental boundary in relation to the uterine tissue is also challenging due to similar appearances [4], irregularity of the boundary, and changing size and shape with gestation, posing problems for segmentation [5]. Figure 1 shows 3D ultrasound providing placental visualisation in axial, coronal, and sagittal planes.
MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision
Li, Jianning, Zhou, Zongwei, Yang, Jiancheng, Pepe, Antonio, Gsaxner, Christina, Luijten, Gijs, Qu, Chongyu, Zhang, Tiezheng, Chen, Xiaoxi, Li, Wenxuan, Wodzinski, Marek, Friedrich, Paul, Xie, Kangxian, Jin, Yuan, Ambigapathy, Narmada, Nasca, Enrico, Solak, Naida, Melito, Gian Marco, Vu, Viet Duc, Memon, Afaque R., Schlachta, Christopher, De Ribaupierre, Sandrine, Patel, Rajnikant, Eagleson, Roy, Chen, Xiaojun, Mächler, Heinrich, Kirschke, Jan Stefan, de la Rosa, Ezequiel, Christ, Patrick Ferdinand, Li, Hongwei Bran, Ellis, David G., Aizenberg, Michele R., Gatidis, Sergios, Küstner, Thomas, Shusharina, Nadya, Heller, Nicholas, Andrearczyk, Vincent, Depeursinge, Adrien, Hatt, Mathieu, Sekuboyina, Anjany, Löffler, Maximilian, Liebl, Hans, Dorent, Reuben, Vercauteren, Tom, Shapey, Jonathan, Kujawa, Aaron, Cornelissen, Stefan, Langenhuizen, Patrick, Ben-Hamadou, Achraf, Rekik, Ahmed, Pujades, Sergi, Boyer, Edmond, Bolelli, Federico, Grana, Costantino, Lumetti, Luca, Salehi, Hamidreza, Ma, Jun, Zhang, Yao, Gharleghi, Ramtin, Beier, Susann, Sowmya, Arcot, Garza-Villarreal, Eduardo A., Balducci, Thania, Angeles-Valdez, Diego, Souza, Roberto, Rittner, Leticia, Frayne, Richard, Ji, Yuanfeng, Ferrari, Vincenzo, Chatterjee, Soumick, Dubost, Florian, Schreiber, Stefanie, Mattern, Hendrik, Speck, Oliver, Haehn, Daniel, John, Christoph, Nürnberger, Andreas, Pedrosa, João, Ferreira, Carlos, Aresta, Guilherme, Cunha, António, Campilho, Aurélio, Suter, Yannick, Garcia, Jose, Lalande, Alain, Vandenbossche, Vicky, Van Oevelen, Aline, Duquesne, Kate, Mekhzoum, Hamza, Vandemeulebroucke, Jef, Audenaert, Emmanuel, Krebs, Claudia, van Leeuwen, Timo, Vereecke, Evie, Heidemeyer, Hauke, Röhrig, Rainer, Hölzle, Frank, Badeli, Vahid, Krieger, Kathrin, Gunzer, Matthias, Chen, Jianxu, van Meegdenburg, Timo, Dada, Amin, Balzer, Miriam, Fragemann, Jana, Jonske, Frederic, Rempe, Moritz, Malorodov, Stanislav, Bahnsen, Fin H., Seibold, Constantin, Jaus, Alexander, Marinov, Zdravko, Jaeger, Paul F., Stiefelhagen, Rainer, Santos, Ana Sofia, Lindo, Mariana, Ferreira, André, Alves, Victor, Kamp, Michael, Abourayya, Amr, Nensa, Felix, Hörst, Fabian, Brehmer, Alexander, Heine, Lukas, Hanusrichter, Yannik, Weßling, Martin, Dudda, Marcel, Podleska, Lars E., Fink, Matthias A., Keyl, Julius, Tserpes, Konstantinos, Kim, Moon-Sung, Elhabian, Shireen, Lamecker, Hans, Zukić, Dženan, Paniagua, Beatriz, Wachinger, Christian, Urschler, Martin, Duong, Luc, Wasserthal, Jakob, Hoyer, Peter F., Basu, Oliver, Maal, Thomas, Witjes, Max J. H., Schiele, Gregor, Chang, Ti-chiun, Ahmadi, Seyed-Ahmad, Luo, Ping, Menze, Bjoern, Reyes, Mauricio, Deserno, Thomas M., Davatzikos, Christos, Puladi, Behrus, Fua, Pascal, Yuille, Alan L., Kleesiek, Jens, Egger, Jan
Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback
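For readers who want to feed such shapes into a vision pipeline, the following is a generic sketch of loading a downloaded mesh with trimesh and sampling a point cloud; the file name is hypothetical and this is ordinary trimesh usage, not the project's own Python API.

```python
# Illustrative sketch: load a downloaded 3D shape and sample a point cloud.
import trimesh

mesh = trimesh.load("liver_0001.stl")                   # hypothetical shape file
points, _ = trimesh.sample.sample_surface(mesh, 2048)   # point cloud for point-based networks
print(mesh.vertices.shape, mesh.faces.shape, points.shape)
```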
Attention and Pooling based Sigmoid Colon Segmentation in 3D CT images
Rahman, Md Akizur, Singh, Sonit, Shanmugalingam, Kuruparan, Iyer, Sankaran, Blair, Alan, Ravindran, Praveen, Sowmya, Arcot
Segmentation of the sigmoid colon is a crucial aspect of treating diverticulitis. It enables accurate identification and localisation of inflammation, which in turn helps healthcare professionals make informed decisions about the most appropriate treatment options. This research presents a novel deep learning architecture for segmenting the sigmoid colon from Computed Tomography (CT) images using a modified 3D U-Net architecture. Several variations of the 3D U-Net model with modified hyper-parameters were examined in this study. Pyramid pooling (PyP) and channel-spatial Squeeze and Excitation (csSE) were also used to improve model performance. The networks were trained using manually annotated sigmoid colon data. A five-fold cross-validation procedure was used on a test dataset to evaluate the network's performance. As indicated by the maximum Dice similarity coefficient (DSC) of 56.92 ± 1.42%, the application of PyP and csSE techniques improves segmentation precision. We explored ensemble methods including averaging, weighted averaging, majority voting, and max ensemble. The results show that the averaging and majority-voting approaches, with a threshold value of 0.5 and consistent weight distribution among the top three models, produced comparable and optimal results with a DSC of 88.11 ± 3.52%. The results indicate that the application of a modified 3D U-Net architecture is effective for segmenting the sigmoid colon in CT images. In addition, the study highlights the potential benefits of integrating ensemble methods to improve segmentation precision.
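A minimal sketch of the averaging and majority-voting ensembles of binary segmentation probability maps is given below; the array shapes and the choice of three models are illustrative.

```python
# Sketch of averaging and majority-voting ensembles over probability maps.
import numpy as np

def average_ensemble(prob_maps, threshold=0.5):
    """Average voxel-wise probabilities from several models, then threshold."""
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)

def majority_vote(prob_maps, threshold=0.5):
    """Threshold each model's map, then keep voxels predicted by most models."""
    votes = (np.asarray(prob_maps) >= threshold).astype(np.uint8)
    return (votes.sum(axis=0) > len(prob_maps) / 2).astype(np.uint8)

# Hypothetical probability maps from the top three models for one CT volume.
prob_maps = [np.random.rand(64, 128, 128) for _ in range(3)]
seg_avg = average_ensemble(prob_maps)
seg_vote = majority_vote(prob_maps)
```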
Brain tumour segmentation using cascaded 3D densely-connected U-net
Ghaffari, Mina, Sowmya, Arcot, Oliver, Ruth
Accurate brain tumour segmentation is a crucial step towards improving disease diagnosis and proper treatment planning. In this paper, we propose a deep-learning based method to segment a brain tumour into its subregions: whole tumour, tumour core and enhancing tumour. The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture of Ronneberger et al. [17] with three main modifications: (i) a heavy encoder, light decoder structure using residual blocks, (ii) employment of dense blocks instead of skip connections, and (iii) utilisation of self-ensembling in the decoder part of the network. The network was trained and tested using two different approaches: a multitask framework to segment all tumour subregions at the same time, and a three-stage cascaded framework to segment one subregion at a time. An ensemble of the results from both frameworks was also computed. To address the class imbalance issue, appropriate patch extraction was employed in a pre-processing step. Connected component analysis was utilised in the post-processing step to reduce false positive predictions. Experimental results on the BraTS20 validation dataset demonstrate that the proposed model achieved average Dice scores of 0.90, 0.82, and 0.78 for whole tumour, tumour core and enhancing tumour, respectively. Keywords: Brain tumour segmentation · Multimodal MRI · Cascaded network · Densely connected CNN.
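The connected-component post-processing step mentioned above can be sketched as follows: small isolated components in the predicted mask are treated as false positives and removed; the minimum component size is an illustrative assumption.

```python
# Sketch of connected-component filtering of a predicted binary mask.
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_voxels=1000):
    """Keep only connected components of the binary mask above a size threshold."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labels, keep).astype(mask.dtype)
```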
Scalable Inference for Nonparametric Hawkes Process Using Pólya-Gamma Augmentation
Zhou, Feng, Li, Zhidong, Fan, Xuhui, Wang, Yang, Sowmya, Arcot, Chen, Fang
In this paper, we consider the sigmoid Gaussian Hawkes process model: the baseline intensity and triggering kernel of the Hawkes process are both modeled as the sigmoid transformation of random trajectories drawn from Gaussian processes (GP). By introducing auxiliary latent random variables (the branching structure, Pólya-Gamma random variables and latent marked Poisson processes), the likelihood is converted into two decoupled components with a Gaussian form, which allows for efficient conjugate analytical inference. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum a posteriori (MAP) estimate. Furthermore, we extend the EM algorithm to an efficient approximate Bayesian inference algorithm: mean-field variational inference. We demonstrate the performance of the two algorithms on simulated data. Experiments on real data show that our proposed inference algorithms can efficiently recover the underlying triggering characteristics.
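As an illustration of the model just described, the following sketch evaluates the conditional intensity of a Hawkes process whose baseline and triggering kernel are sigmoid-transformed latent functions; the upper bounds, truncated kernel support and toy latent functions standing in for GP draws are assumptions made for the example.

```python
# Sketch of a sigmoid-link Hawkes conditional intensity:
# lambda(t) = lam_mu * sigmoid(f_mu(t)) + sum_{t_i < t} lam_phi * sigmoid(f_phi(t - t_i))
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def intensity(t, events, f_mu, f_phi, lam_mu=2.0, lam_phi=1.0, support=5.0):
    """Conditional intensity at time t given past event times."""
    mu = lam_mu * sigmoid(f_mu(t))
    past = [ti for ti in events if 0 < t - ti <= support]   # truncated kernel support
    return mu + sum(lam_phi * sigmoid(f_phi(t - ti)) for ti in past)

# Toy latent functions standing in for GP sample paths.
f_mu = lambda t: np.sin(t)
f_phi = lambda d: -d   # excitation decays after the sigmoid transform
print(intensity(3.0, events=[0.5, 2.2, 2.8], f_mu=f_mu, f_phi=f_phi))
```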