Montage based 3D Medical Image Retrieval from Traumatic Brain Injury Cohort using Deep Convolutional Neural Network

arXiv.org Machine Learning

Brain imaging analysis on clinically acquired computed tomography (CT) is essential for the diagnosis, risk prediction of progression, and treatment of the structural phenotypes of traumatic brain injury (TBI). However, in real clinical imaging scenarios, CT images of other body regions (e.g., neck, abdomen, chest, pelvis) are typically captured along with whole-brain CT scans. For instance, in a typical clinical TBI imaging cohort, only ~15% of CT scans actually contain whole-brain images suitable for volumetric brain analyses; the remainder are partial-brain or non-brain images. A manual image retrieval process is therefore typically required to isolate the whole-brain CT scans from the cohort, but manual retrieval is time- and resource-consuming and becomes even more difficult for larger cohorts. To alleviate this manual effort, in this paper we propose an automated 3D medical image retrieval pipeline, called deep montage-based image retrieval (dMIR), which performs classification on 2D montage images via a deep convolutional neural network. The novelty of the proposed method is to recast the 3D medical image retrieval task as classification of 2D montage images. In a cohort of 2000 clinically acquired TBI scans, 794 scans were used as training data, 206 as validation data, and the remaining 1000 as testing data. The proposed method achieved accuracy = 1.0, recall = 1.0, precision = 1.0, and F1 = 1.0 on the validation data, and accuracy = 0.988, recall = 0.962, precision = 0.962, and F1 = 0.962 on the testing data. Thus, the proposed dMIR is able to perform accurate whole-brain CT image retrieval from large-scale clinical cohorts.
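A minimal sketch of the montage idea, assuming a PyTorch setup: tile evenly spaced axial slices of a 3D volume into one 2D image, then feed that image to a small CNN that flags whole-brain scans. The function and class names (make_montage, MontageClassifier) are illustrative, not taken from the paper's code, and the volume below is a synthetic placeholder.

```python
# Hypothetical sketch of montage-based retrieval: tile axial slices of a
# 3D CT volume into a single 2D montage, then classify the montage with a CNN.
import numpy as np
import torch
import torch.nn as nn

def make_montage(volume: np.ndarray, grid: int = 6) -> np.ndarray:
    """Tile grid*grid evenly spaced axial slices into one 2D image."""
    depth = volume.shape[0]
    idx = np.linspace(0, depth - 1, grid * grid).astype(int)
    rows = [np.concatenate(list(volume[idx[r * grid:(r + 1) * grid]]), axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

class MontageClassifier(nn.Module):
    """Small CNN that predicts whether a montage shows a whole-brain scan."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # whole-brain vs. not

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Usage: a random 3D volume stands in for a clinical CT scan.
vol = np.random.rand(120, 64, 64).astype(np.float32)
montage = make_montage(vol)                       # shape (384, 384)
logits = MontageClassifier()(torch.from_numpy(montage)[None, None])
print(logits.shape)  # torch.Size([1, 2])
```

In the paper's framing, this reduction to a single 2D image is what lets an ordinary image classifier stand in for an expensive 3D retrieval step.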


From one brain scan, more information for medical artificial intelligence

#artificialintelligence

MIT researchers have developed a system that gleans far more labeled training data from unlabeled data, which could help machine-learning models better detect structural patterns in brain scans associated with neurological diseases. The system learns structural and appearance variations in unlabeled scans and uses that information to shape and mold one labeled scan into thousands of new, distinct labeled scans. An active new area in medicine involves training deep-learning models to detect structural patterns in brain scans associated with neurological diseases and disorders, such as Alzheimer's disease and multiple sclerosis.
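A minimal sketch of the augmentation idea, with smooth random fields standing in for the transforms the system actually learns from unlabeled scans: one deformation warps both the labeled image and its segmentation so they stay aligned, and one appearance field perturbs intensities. All names here are hypothetical, and the 2D arrays are placeholders for real 3D scans.

```python
# Sketch only: random smooth fields stand in for *learned* spatial and
# appearance transforms; the real method models these with neural networks.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthesize(labeled_img, labels, rng):
    """Warp one labeled scan into a new, distinct labeled scan."""
    shape = labeled_img.shape
    # Spatial transform: a smooth random displacement field.
    disp = [gaussian_filter(rng.standard_normal(shape), sigma=8) * 5
            for _ in range(labeled_img.ndim)]
    coords = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    warped = [c + d for c, d in zip(coords, disp)]
    new_img = map_coordinates(labeled_img, warped, order=1)
    # Labels follow the same deformation, so they remain valid.
    new_lab = map_coordinates(labels, warped, order=0)
    # Appearance transform: a smooth intensity perturbation.
    new_img = new_img + gaussian_filter(rng.standard_normal(shape), sigma=16)
    return new_img, new_lab

rng = np.random.default_rng(0)
atlas = rng.random((64, 64))          # one labeled scan (2D for brevity)
seg = (atlas > 0.5).astype(np.int32)  # its segmentation labels
img, lab = synthesize(atlas, seg, rng)
```

Repeating this with many sampled transforms is what turns a single labeled scan into thousands of new labeled training examples.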


New technique makes brain scans better

#artificialintelligence

People who suffer a stroke often undergo a brain scan at the hospital, allowing doctors to determine the location and extent of the damage. Researchers who study the effects of strokes would love to be able to analyze these images, but the resolution is often too low for many analyses. To help scientists take advantage of this untapped wealth of data from hospital scans, a team of MIT researchers, working with doctors at Massachusetts General Hospital and many other institutions, has devised a way to boost the quality of these scans so they can be used for large-scale studies of how strokes affect different people and how they respond to treatment. "These images are quite unique because they are acquired in routine clinical practice when a patient comes in with a stroke," says Polina Golland, an MIT professor of electrical engineering and computer science. Using these scans, researchers could study how genetic factors influence stroke survival or how people respond to different treatments.


@Radiology_AI

#artificialintelligence

This study set out to determine whether deep learning algorithms developed in a public competition could identify lung cancer on low-dose CT scans with performance similar to that of radiologists. In this retrospective study, a dataset of 300 patient scans was used for model assessment: 150 scans came from the competition set and 150 from an independent dataset. Both test datasets contained 50 cancer-positive scans and 100 cancer-negative scans. The reference standard was set by histopathologic examination for cancer-positive scans and by imaging follow-up for at least 2 years for cancer-negative scans. The test datasets were applied to the three top-performing algorithms from the Kaggle Data Science Bowl 2017 public competition: grt123, Julian de Wit and Daniel Hammack (JWDH), and Aidence.
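A hedged sketch of how such a test set might be scored, assuming per-scan probability outputs from an algorithm; the scores below are random placeholders, not results from grt123, JWDH, or Aidence.

```python
# Evaluation sketch for one 150-scan test set: 50 cancer-positive,
# 100 cancer-negative, labels fixed by the reference standard.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = np.array([1] * 50 + [0] * 100)   # reference-standard labels
y_score = rng.random(150)                 # stand-in algorithm scores
y_pred = (y_score >= 0.5).astype(int)     # one operating point

tp = int(((y_pred == 1) & (y_true == 1)).sum())
tn = int(((y_pred == 0) & (y_true == 0)).sum())
sensitivity = tp / 50    # fraction of cancer-positive scans caught
specificity = tn / 100   # fraction of cancer-negative scans cleared
auc = roc_auc_score(y_true, y_score)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
```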


Neural Network Compares Brain MRIs in a Flash (NVIDIA Blog)

#artificialintelligence

Binge-watching three seasons of "The Office" can make you feel as if your brain has turned into mush. But in actuality, the brain is always pretty mushy and malleable, which makes neurosurgery even more difficult than it sounds. To gauge their success, brain surgeons compare MRI scans taken before and after the procedure to determine whether a tumor has been fully removed. This process takes time, so if an MRI is taken mid-operation, the doctor must compare the scans by eye. But the brain shifts during surgery, making that comparison difficult, though no less critical.
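For intuition, a small sketch of the underlying comparison task, using classical phase correlation from scikit-image as a simple stand-in for the neural registration the blog describes; the arrays here are synthetic placeholders, not real MRI data.

```python
# Align a "post-op" image to the "pre-op" image, then inspect the residual:
# after alignment, large differences indicate real anatomical change rather
# than the brain simply having shifted.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
pre = rng.random((128, 128))        # stand-in pre-op MRI slice
post = nd_shift(pre, (3, -2))       # post-op: same anatomy, shifted

offset, _, _ = phase_cross_correlation(pre, post)  # estimate the shift
aligned = nd_shift(post, offset)                   # undo it
residual = np.abs(pre - aligned)                   # changes that remain
print(offset, residual.mean())
```

The appeal of the learned approach in the article is speed: a trained network can produce such an alignment in one pass, fast enough to be useful mid-surgery.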