
Deep-belief networks detect glioblastoma tumors from MRI scans


Scientists from South Ural State University, working with foreign colleagues, have proposed a new model for the classification of MRI images based on a deep-belief network that will help detect malignant brain tumors faster and more accurately. The study was published in the Journal of Big Data, which is indexed in the Scopus database. Glioblastoma (GBM) is a stage 4 malignant brain tumor in which a large proportion of tumor cells are reproducing at any given moment. Such tumors are life-threatening and can lead to partial or complete mental and physical disability. The research was carried out by an international group of scientists from Indian universities and South Ural State University.
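A deep-belief network is built by stacking restricted Boltzmann machines (RBMs) and pretraining them greedily, layer by layer, before any fine-tuning. As a rough illustration of that idea only (not the authors' published model; the layer sizes and the toy binary "patch" data standing in for flattened MRI patches are invented), here is a minimal NumPy sketch of greedy RBM pretraining with one-step contrastive divergence:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A single restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def train_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        # Negative phase: one mean-field Gibbs step (CD-1).
        v1 = sigmoid(h0 @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence gradient update.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=50):
    """Greedy layer-wise pretraining: each RBM learns features of the
    representation produced by the layer below it."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.train_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)   # propagate up to train the next layer
    return rbms

# Toy binary data standing in for flattened image patches (sizes invented).
data = (rng.random((64, 36)) > 0.5).astype(float)
dbn = pretrain_dbn(data, layer_sizes=[16, 8])
features = data
for rbm in dbn:
    features = rbm.hidden_probs(features)
print(features.shape)  # (64, 8)
```

In a full pipeline the pretrained stack would be topped with a classifier layer and fine-tuned with backpropagation; the sketch stops at the unsupervised feature hierarchy.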

Prediction of Overall Survival of Brain Tumor Patients (Machine Learning)

Automated brain tumor segmentation plays an important role in the diagnosis and prognosis of the patient. The main focus of this paper is to segment tumors from the BraTS 2018 benchmark dataset and to use age, shape, and volumetric features to predict the overall survival of patients. The random forest classifier achieves an overall survival accuracy of 59% on the test dataset and 67% on the subset with a resection status of gross total resection. The proposed approach uses fewer features but achieves better accuracy than state-of-the-art methods. The medical fraternity considers brain tumors among the most fatal types of cancer [1]. Brain tumors are divided into two categories based on origin and malignancy; the former is further classified as primary and secondary.
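The survival-prediction step described above can be sketched with scikit-learn's `RandomForestClassifier`. The feature names, synthetic data, and three-class survival label below are invented stand-ins, not the actual BraTS 2018 features or labels the authors used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-patient features: age plus shape/volumetric measures.
n = 200
age = rng.uniform(20, 80, n)
tumor_volume = rng.uniform(1, 100, n)
surface_irregularity = rng.random(n)
X = np.column_stack([age, tumor_volume, surface_irregularity])

# Synthetic 3-class survival label (short / mid / long survivors),
# driven by a made-up combination of age and tumor features.
score = 0.02 * age + 0.01 * tumor_volume + 0.3 * surface_irregularity
y = np.digitize(score, np.quantile(score, [1 / 3, 2 / 3]))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

With only a handful of interpretable features, the random forest also exposes `feature_importances_`, which is one way such a study can report which of age, shape, or volume drives the survival prediction.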

MR Imaging–Based Radiomic Signatures of Distinct Molecular Subgroups of Medulloblastoma


BACKGROUND AND PURPOSE: Distinct molecular subgroups of pediatric medulloblastoma confer important differences in prognosis and therapy. Currently, tissue sampling is the only method to obtain information for classification. Our goal was to develop and validate radiomic and machine learning approaches for predicting molecular subgroups of pediatric medulloblastoma. MATERIALS AND METHODS: In this multi-institutional retrospective study, we evaluated MR imaging datasets of 109 pediatric patients with medulloblastoma from 3 children's hospitals from January 2001 to January 2014. A computational framework was developed to extract MR imaging–based radiomic features from tumor segmentations, and we tested 2 predictive models: a double 10-fold cross-validation using a combined dataset consisting of all 3 patient cohorts and a 3-dataset cross-validation, in which training was performed on 2 cohorts and testing was performed on the third independent cohort. We used the Wilcoxon rank sum test for feature selection with assessment of area under the receiver operating characteristic curve to evaluate model performance.
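The feature-selection-and-evaluation step described above (rank candidate radiomic features by Wilcoxon rank-sum p-value between subgroups, then assess discrimination with the area under the ROC curve) can be sketched as follows. The feature matrix, group sizes, and effect size are invented; only the mechanics mirror the described pipeline:

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical radiomic feature matrix: 40 patients x 5 features, with a
# binary label (one molecular subgroup vs. the rest).
n, d = 40, 5
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(n, d))
X[y == 1, 0] += 2.0          # feature 0 truly separates the subgroups

# Rank features by Wilcoxon rank-sum p-value between the two groups.
pvals = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(d)])
best = int(np.argmin(pvals))

# Evaluate the selected feature as a classification score via ROC AUC.
auc = roc_auc_score(y, X[:, best])
print(best, round(auc, 2))
```

In the paper's double 10-fold and 3-dataset cross-validation designs, this select-then-evaluate step would be repeated inside each training fold so that feature selection never sees the test cohort.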

3D AGSE-VNet: An Automatic Brain Tumor MRI Data Segmentation Framework (Artificial Intelligence)

Background: Glioma is the most common malignant brain tumor, with a high morbidity rate and a mortality rate of more than three percent, which seriously endangers human health. The main method of imaging brain tumors in the clinic is MRI. Segmentation of brain tumor regions from multi-modal MRI scans is helpful for treatment planning, post-diagnosis monitoring, and evaluation of treatment effect. However, the common practice in clinical brain tumor segmentation is still manual segmentation, which is time-consuming and varies considerably between operators; a consistent and accurate automatic segmentation method is therefore urgently needed. Methods: To meet these challenges, we propose an automatic brain tumor MRI data segmentation framework called AGSE-VNet. In our study, a Squeeze-and-Excite (SE) module is added to each encoder and an Attention Guide Filter (AG) module to each decoder. The SE module uses channel relationships to automatically enhance useful information in each channel and suppress useless information, while the attention mechanism guides edge information and removes the influence of irrelevant information such as noise. Results: We used the BraTS 2020 challenge online verification tool to evaluate our approach. The Dice scores of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 0.68, 0.85, and 0.70, respectively. Conclusion: Although MRI images vary in intensity, AGSE-VNet is not affected by the size of the tumor and can accurately extract the features of the three regions; it achieves impressive results and makes a meaningful contribution to the clinical diagnosis and treatment of brain tumor patients.
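The channel recalibration performed by a Squeeze-and-Excite module (squeeze the feature map with global average pooling, excite through a small bottleneck MLP, then rescale each channel by a sigmoid gate) can be sketched in NumPy. The shapes, reduction ratio, and random weights below are illustrative, not AGSE-VNet's actual parameters:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Channel recalibration in the style of a Squeeze-and-Excitation block.

    x: feature map of shape (channels, D, H, W); w1, w2: weights of the
    bottleneck (reduction) and expansion fully connected layers.
    """
    c = x.shape[0]
    # Squeeze: global average pooling over the spatial dimensions.
    z = x.reshape(c, -1).mean(axis=1)            # (c,)
    # Excite: bottleneck MLP with ReLU, then sigmoid gating per channel.
    s = np.maximum(z @ w1, 0.0)                  # (c // r,)
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))       # (c,), values in (0, 1)
    # Scale: reweight each channel by its learned gate.
    return x * gate[:, None, None, None]

rng = np.random.default_rng(0)
c, r = 8, 2                                      # channels, reduction ratio
x = rng.normal(size=(c, 4, 4, 4))                # a toy 3-D feature map
w1 = rng.normal(size=(c, c // r)) * 0.1
w2 = rng.normal(size=(c // r, c)) * 0.1
out = squeeze_excite(x, w1, w2)
print(out.shape)  # (8, 4, 4, 4)
```

Because the gate is computed from the whole feature map, each channel's scaling factor encodes a global summary of that channel, which is how the SE module "enhances useful information in each channel."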

Contrastive Representation Learning for Rapid Intraoperative Diagnosis of Skull Base Tumors Imaged Using Stimulated Raman Histology (Artificial Intelligence)

Background: Accurate diagnosis of skull base tumors is essential for providing personalized surgical treatment strategies. Intraoperative diagnosis can be challenging due to tumor diversity and lack of intraoperative pathology resources. Objective: To develop an independent and parallel intraoperative pathology workflow that can provide rapid and accurate skull base tumor diagnoses using label-free optical imaging and artificial intelligence (AI). Method: We used a fiber laser-based, label-free, non-consumptive, high-resolution microscopy method ($<$ 60 sec per 1 $\times$ 1 mm$^\text{2}$), called stimulated Raman histology (SRH), to image a consecutive, multicenter cohort of skull base tumor patients. SRH images were then used to train a convolutional neural network (CNN) model using three representation learning strategies: cross-entropy, self-supervised contrastive learning, and supervised contrastive learning. Our trained CNN models were tested on a held-out, multicenter SRH dataset. Results: SRH was able to image the diagnostic features of both benign and malignant skull base tumors. Of the three representation learning strategies, supervised contrastive learning most effectively learned the distinctive and diagnostic SRH image features for each of the skull base tumor types. In our multicenter testing set, cross-entropy achieved an overall diagnostic accuracy of 91.5%, self-supervised contrastive learning 83.9%, and supervised contrastive learning 96.6%. Our trained model was able to identify tumor-normal margins and detect regions of microscopic tumor infiltration in whole-slide SRH images. Conclusion: SRH with AI models trained using contrastive representation learning can provide rapid and accurate intraoperative diagnosis of skull base tumors.
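Supervised contrastive learning pulls embeddings of same-class samples together while pushing different classes apart. Here is a minimal NumPy sketch of a supervised contrastive loss in the style of Khosla et al.; the temperature value and toy embeddings are assumptions, not the paper's implementation:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalised embeddings z (n, d):
    each anchor's positives are all other samples with the same label."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(z)
    not_self = ~np.eye(n, dtype=bool)
    # Log-softmax over every other sample (self excluded from denominator).
    sim = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    denom = (np.exp(sim) * not_self).sum(axis=1, keepdims=True)
    log_prob = sim - np.log(denom)
    pos = (labels[:, None] == labels[None, :]) & not_self
    return (-(log_prob * pos).sum(axis=1) / pos.sum(axis=1)).mean()

labels = np.array([0, 0, 1, 1])
# Embeddings clustered by class: positives close, negatives far apart.
tight = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.9, -0.1]])
print(round(supcon_loss(tight, labels), 3))
```

Relabeling the same embeddings so each anchor's positive lies in the opposite cluster increases the loss; minimizing it therefore drives the CNN toward the class-separated SRH feature space the abstract describes.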