Deep learning framework for prediction of infection severity of COVID-19

#artificialintelligence

With the onset of the COVID-19 pandemic, quantifying the condition of positively diagnosed patients is of paramount importance. Chest CT scans can be used to measure the severity of a lung infection and to isolate the involvement sites in order to increase awareness of a patient's disease progression. In this work, we developed a deep learning framework for lung infection severity prediction. To this end, we collected a dataset of 232 chest CT scans, incorporated two public datasets with an additional 59 scans for our model's training, and used two external test sets with 21 scans for evaluation. On an input chest computed tomography (CT) scan, our framework performs, in parallel, lung lobe segmentation using a pre-trained model and infection segmentation using three distinct trained SE-ResNet18-based U-Net models, one for each of the axial, coronal, and sagittal views. Given the lobe and infection segmentation masks, we calculate the infection severity percentage in each lobe and classify that percentage into six categories of infection severity score using a k-nearest neighbors (k-NN) model. The lobe segmentation model achieved a Dice similarity coefficient (DSC) in the range of [0.918, 0.981] across the different lung lobes, and our infection segmentation models obtained DSC scores of 0.7254 and 0.7105 on our two test sets, respectively. Similarly, two resident radiologists were assigned the same infection segmentation tasks, for which they obtained DSC scores of 0.7281 and 0.6693 ...
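The per-lobe severity step described above can be sketched in a few lines. This is only an illustration of the general idea (binary masks, overlap percentage, a hand-rolled 1-D k-NN vote); the paper's actual thresholds, features, and k value are not given in the summary and are assumptions here.

```python
import numpy as np

def lobe_severity_percent(lobe_mask, infection_mask):
    """Percentage of a lobe's voxels covered by the infection mask."""
    lobe_voxels = np.count_nonzero(lobe_mask)
    if lobe_voxels == 0:
        return 0.0
    infected = np.count_nonzero(lobe_mask & infection_mask)
    return 100.0 * infected / lobe_voxels

def knn_severity_score(percent, train_percents, train_scores, k=3):
    """Classify a severity percentage into a discrete score via k-NN."""
    dists = np.abs(np.asarray(train_percents, dtype=float) - percent)
    nearest = np.argsort(dists)[:k]
    votes = np.asarray(train_scores)[nearest]
    # Majority vote among the k nearest training examples.
    return int(np.bincount(votes).argmax())

# Hypothetical training pairs: (severity percentage, score in 0..5).
train_p = [0, 3, 8, 15, 30, 55, 80]
train_s = [0, 1, 1, 2, 3, 4, 5]
```

In the real framework the masks would come from the lobe and infection segmentation networks; here they are plain boolean arrays.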


Cloud-based medical imaging reconstruction using deep neural networks

#artificialintelligence

Medical imaging techniques like computed tomography (CT), magnetic resonance imaging (MRI), medical x-ray imaging, ultrasound imaging, and others are commonly used by doctors for various reasons. Some examples include detecting changes in the appearance of organs, tissues, and vessels, and detecting abnormalities such as tumors and various other types of pathologies. Before doctors can use the data from those techniques, the data needs to be transformed from its native raw form to a form that can be displayed as an image on a computer screen. This process is known as image reconstruction, and it plays a crucial role in a medical imaging workflow--it's the step that creates diagnostic images that can then be reviewed by doctors. In this post, we discuss a use case of MRI reconstruction, but the architectural concepts can be applied to other types of image reconstruction.
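For MRI, the raw data live in k-space (the spatial-frequency domain), and the simplest reconstruction is an inverse Fourier transform. A minimal NumPy sketch, assuming fully sampled Cartesian k-space with the DC component at the center and ignoring coil combination and any deep learning stage:

```python
import numpy as np

def reconstruct_mri(kspace):
    """Reconstruct a magnitude image from fully sampled Cartesian k-space.

    kspace: 2-D complex array, DC component at the center (the usual
    scanner convention), hence the ifftshift/fftshift pair.
    """
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(image)

# Round-trip check: simulate k-space from a known image, then reconstruct.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0  # simple square "phantom"
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = reconstruct_mri(kspace)
```

Deep-learning reconstruction methods replace or augment this inverse transform, for example to fill in undersampled k-space, but the Fourier relationship above is the starting point.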


Revolutionary AI model for cancer diagnosis

#artificialintelligence

Professor Park Sang-Hyun of the Department of Robotics and Mechatronics Engineering has led a research team in developing a weakly supervised deep learning model that can accurately locate cancer in pathological images using only image-level labels indicating whether cancer is present. Existing deep learning models required datasets in which the cancer sites were explicitly annotated, whereas the model developed in this study improves efficiency by dispensing with such annotations. It is hoped that it will make a significant contribution to the relevant research field. Generally, locating the cancer site involves manual zoning, which is time-consuming and costly, so the new AI model looks promising. Weakly supervised learning models that zone cancer sites using only coarse labels, such as 'whether cancer is present in the image or not', are under active study.
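Class activation mapping (CAM) is one common way this kind of weakly supervised localization works; the article does not specify the team's exact method, so this is only an illustrative sketch. The final convolutional feature maps are weighted by the classifier's weights for the "cancer present" class, yielding a coarse heatmap without any pixel-level labels:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of CNN feature maps -> coarse localization heatmap.

    feature_maps: (C, H, W) activations from the last conv layer.
    class_weights: (C,) linear-classifier weights for one class.
    """
    # Contract over the channel axis: (C,) x (C, H, W) -> (H, W).
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)       # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()      # normalize to [0, 1]
    return cam
```

High values in the resulting map mark regions whose features drove the image-level "cancer present" decision, which is what lets such models zone lesions without site annotations.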


How Fear Restructures the Mouse Brain

#artificialintelligence

Neurons communicate via synapses--tiny, button-like protrusions that sprout from one neuron and connect it to the next. These minuscule structures are thought to be the backbone of learning and memory, changing in strength and number as we learn. At about 1/5,000th the width of a human hair, synapses can be hard to visualize, and researchers are just beginning to develop the tools necessary to do so. In a study published in Cell Reports on August 2, researchers at the Chinese Academy of Sciences and Shanghai University used a combination of deep learning algorithms and high-resolution electron microscopy to map out how frightful experiences rearrange brain connections. They found that when mice learn to fear the sound of a buzzer, neurons in their hippocampus form more connections with other neurons downstream and shuttle more mitochondria to synaptic sites.


Deep Learning, Subtraction Technique Optimal for Coronary Stent Evaluation by CTA

#artificialintelligence

According to ARRS' American Journal of Roentgenology (AJR), the combination of deep-learning reconstruction (DLR) and a subtraction technique yielded optimal diagnostic performance for the detection of in-stent restenosis by coronary CTA. Noting that these findings could guide patient selection for invasive coronary stent evaluation, combining DLR with a two-breath-hold subtraction technique "may help overcome challenges related to stent-related blooming artifact," added corresponding author Yi-Ning Wang from the State Key Laboratory of Complex Severe and Rare Diseases at China's Peking Union Medical College Hospital. Between March 2020 and August 2021, Wang and team studied 30 patients (22 men, 8 women; mean age, 63.6 years) with a total of 59 coronary stents who underwent coronary CTA using the two-breath-hold technique (i.e., noncontrast and contrast-enhanced acquisitions). Conventional and subtraction images were reconstructed for hybrid iterative reconstruction (HIR) and DLR, while maximum visible in-stent lumen diameter was measured. Two readers independently evaluated images for in-stent restenosis (≥50% stenosis).
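The subtraction technique pairs a noncontrast acquisition with a contrast-enhanced one so that the high-attenuation stent material common to both can be removed, leaving mainly the iodine-enhanced lumen. A minimal voxel-wise sketch (real pipelines also register the two volumes first, which is omitted here):

```python
import numpy as np

def subtraction_image(contrast_vol, noncontrast_vol):
    """Voxel-wise subtraction of a noncontrast CT volume from a
    contrast-enhanced one; structures present in both (e.g., the metal
    stent) cancel, while the enhanced lumen signal remains."""
    return contrast_vol.astype(np.float32) - noncontrast_vol.astype(np.float32)
```

In the study, both conventional and subtraction images were then reconstructed with HIR and DLR; the subtraction itself is the simple difference above.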


Deep Learning Applications for Computer Vision

#artificialintelligence

Deep learning has been a game changer in the field of computer vision. It's widely used to teach computers to "see" and analyze the environment similarly to the way humans do. Its applications include self-driving cars, robotics, data analysis, and much more. In this blog post, we will explain in detail the applications of deep learning for computer vision. But before doing that, let's understand what computer vision and deep learning are. Computer vision (CV) is a field of artificial intelligence that enables computers to extract information from images, videos, and other visual sources. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images.


AI Integrates Multiple Data Types to Predict Cancer Outcomes

#artificialintelligence

A study by researchers at Brigham and Women's Hospital has demonstrated a proof-of-concept model that uses artificial intelligence (AI) to combine multiple types of data from different sources, and predict patient outcomes for 14 different types of cancer. "This work sets the stage for larger health care AI studies that combine data from multiple sources," said research lead Faisal Mahmood, PhD, an assistant professor in the division of computational pathology at Brigham and associate member of the cancer program at the Broad Institute of Harvard and MIT. "In a broader sense, our findings emphasize a need for building computational pathology prognostic models with much larger datasets and downstream clinical trials to establish utility." Mahmood and colleagues reported on their work in Cancer Cell, in a paper titled, "Pan-cancer integrative histology-genomic analysis via multimodal deep learning." It has long been understood that predicting outcomes in patients with cancer requires considering many different factors, such as patient history, genes, and disease pathology, yet clinicians struggle to integrate this information when making decisions about patient care.
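One common pattern for combining data types such as histology and genomics is late fusion: each modality is embedded separately, and the embeddings are combined before a final prediction head. A minimal sketch, where the embedding sizes and the simple concatenation-plus-linear risk head are illustrative assumptions rather than the paper's actual architecture:

```python
import numpy as np

def fuse_and_score(histology_emb, genomic_emb, weights, bias=0.0):
    """Concatenate per-modality embeddings and apply a linear risk head."""
    fused = np.concatenate([histology_emb, genomic_emb])
    return float(fused @ weights + bias)

rng = np.random.default_rng(0)
h = rng.normal(size=32)   # hypothetical histology (slide) embedding
g = rng.normal(size=16)   # hypothetical genomic-profile embedding
w = rng.normal(size=48)   # weights of the fused risk head
risk = fuse_and_score(h, g, w)
```

The appeal of this design is that each encoder can be trained on its own modality, and the fusion step is where the model learns how the modalities jointly inform prognosis.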


Nuclear morphology is a deep learning biomarker of cellular senescence - Nature Aging

#artificialintelligence

Cellular senescence is an important factor in aging and many age-related diseases, but understanding its role in health is challenging due to the lack of exclusive or universal markers. Using neural networks, we predict senescence from the nuclear morphology of human fibroblasts with up to 95% accuracy, and investigate murine astrocytes, murine neurons, and fibroblasts with premature aging in culture. After generalizing our approach, the predictor recognizes higher rates of senescence in p21-positive and ethynyl-2’-deoxyuridine (EdU)-negative nuclei in tissues and shows an increasing rate of senescent cells with age in H&E-stained murine liver tissue and human dermal biopsies. Evaluating medical records reveals that higher rates of senescent cells correspond to decreased rates of malignant neoplasms and increased rates of osteoporosis, osteoarthritis, hypertension and cerebral infarction. In sum, we show that morphological alterations of the nucleus can serve as a deep learning predictor of senescence that is applicable across tissues and species and is associated with health outcomes in humans. Senescent cells are typically identified by a combination of senescence-associated markers, and the phenotype is heterogeneous. Here, using deep neural networks, Heckenbach et al. show that nuclear morphology can be used to predict cellular senescence in images of tissues and cell cultures.


Real-World Machine Learning--PyTorch and Monai for Healthcare Imaging

#artificialintelligence

To improve your skills in machine learning and artificial intelligence, it is important to solve real-world problems. What better problem to solve than helping to save people's lives? Machine learning is being used more and more in the field of healthcare. PyTorch and MONAI can be used to detect tumors in the liver. We just published a course on freeCodeCamp.org.


#2,673 – AI Week: Human Gaze AI

#artificialintelligence

A new AI system that was trained to mimic human gaze could soon be used to detect cancer. "Being able to focus our attention is an important part of the human visual system, which allows humans to select and interpret the most relevant information in a particular scene. Scientists all over the world have been using computer software to try and recreate this ability to pick out the most salient parts of an image, but with mixed success up until now. In the study, the team used a deep learning computer algorithm known as a convolutional neural network, which is designed to mimic the interconnected web of neurons in the human brain and is modelled specifically on the visual cortex. This type of algorithm is ideal for taking images as an input and being able to assign importance to various objects or aspects within the image itself. According to the team, they utilised a huge database of images in which each image had already been assessed, or viewed, by humans and assigned so-called 'areas of interest' using eye-tracking software. These images were then fed into the algorithm and by using deep learning the system slowly began to learn from the images to a point where it could then accurately predict which parts of the image were most salient. Researchers said their system was tested against seven advanced visual saliency systems already in use, and was shown to be 'superior on all metrics'. 'Being able to successfully predict where people look in natural images could unlock a wide range of applications from automatic target detection to robotics, image processing and medical diagnostics,' said Dr Hantao Liu, co-author of the study, from Cardiff University's School of Computer Science and Informatics."