Imaging


Machine learning algorithm to diagnose deep vein thrombosis

#artificialintelligence

A team of researchers is developing an artificial intelligence (AI) algorithm that aims to diagnose deep vein thrombosis (DVT) more quickly, and as effectively, as traditional radiologist-interpreted diagnostic scans, potentially cutting long patient waiting lists and avoiding unnecessary DVT treatment for patients who do not have the condition. DVT is a type of blood clot most commonly formed in the leg, causing swelling, pain and discomfort; if left untreated, it can lead to fatal blood clots in the lungs. Researchers at Oxford University, Imperial College and the University of Sheffield collaborated with the tech company ThinkSono (led by Fouad Al-Noor and Sven Mischkewitz) to train a machine learning algorithm, AutoDVT, to distinguish patients who had DVT from those without it. The algorithm diagnosed DVT accurately when compared with the gold-standard ultrasound scan, and the team estimated that using it could save health services $150 per examination. "Traditionally, DVT diagnoses need a specialist ultrasound scan performed by a trained radiographer, and we have found that the preliminary data using the AI algorithm coupled to a hand-held ultrasound machine shows promising results," said study lead Dr. Nicola Curry, a researcher at Oxford University's Radcliffe Department of Medicine and clinician at Oxford University Hospitals NHS Foundation Trust.


@Radiology_AI

#artificialintelligence

On October 5, 2020, the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) 2020 conference hosted a virtual panel discussion with members of the Machine Learning Steering Subcommittee of the Radiological Society of North America. The MICCAI Society brings together scientists, engineers, physicians, educators, and students from around the world. Both societies share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel elaborated on how collaborations between radiologists and machine learning scientists facilitate the creation and clinical success of imaging technology for radiology. This report presents structured highlights of the moderated dialogue at the panel.


Pneumonia Detection

#artificialintelligence

Build a deep learning model that can detect pneumonia from patients' chest X-ray images. Below is the high-level approach I took to create the deep learning model. I first collected the data from Kaggle, which consist of chest X-ray images of patients from China. Then, I moved on to creating the architecture of the convolutional neural network (CNN) model, a type of deep learning model. The data, obtained from Kaggle, contain 5,856 chest X-ray images of pediatric patients under the age of five from a medical center in Guangzhou, China.
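
The post describes the approach only at a high level; the following is a minimal sketch of a binary pneumonia-vs-normal CNN of the kind described, assuming the Kaggle images have been organized into train and validation folders with one subdirectory per class (the paths, image size and layer settings are illustrative assumptions, not the author's actual model).

```python
# Minimal sketch (not the author's code): a small CNN that classifies
# chest X-ray images as pneumonia vs. normal.
# Assumes images are arranged as data/train/<class>/ and data/val/<class>/;
# paths and hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
BATCH = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # probability of pneumonia
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```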


Google's new deep learning system can give a boost to radiologists

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Deep learning can detect abnormal chest X-rays with accuracy that matches that of professional radiologists, according to a new paper by a team of AI researchers at Google published in the peer-reviewed science journal Nature. The deep learning system can help radiologists prioritize chest X-rays, and it can also serve as a first-response tool in emergency settings where experienced radiologists are not available. The findings show that, while deep learning is not close to replacing radiologists, it can help boost their productivity at a time when the world is facing a severe shortage of medical experts. The paper also shows how far the AI research community has come in building processes that reduce the risks of deep learning models and in producing work that can be built on in the future.
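
The article describes prioritization only in general terms; the sketch below shows one simple way such a system could be used to triage a reading worklist, assuming a hypothetical predict_abnormality() function standing in for the trained classifier (the function, class names and fields are illustrative, not Google's implementation).

```python
# Minimal sketch (not Google's system): triaging a chest X-ray worklist by a
# model's abnormality score so likely-abnormal studies are read first.
# predict_abnormality() is a hypothetical stand-in for a trained classifier.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    image_path: str

def predict_abnormality(image_path: str) -> float:
    """Hypothetical: return the model's probability that the X-ray is abnormal."""
    raise NotImplementedError("plug in a trained classifier here")

def prioritize(worklist: list[Study]) -> list[Study]:
    # Score every pending study, then read the highest-risk studies first.
    scored = [(predict_abnormality(s.image_path), s) for s in worklist]
    return [s for _, s in sorted(scored, key=lambda t: t[0], reverse=True)]
```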


Deep learning model classifies brain tumors with single MRI scan

#artificialintelligence

"This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume," said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology's Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri. The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each was documented through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope. "Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination," he said. To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources.


Pinaki Laskar on LinkedIn: #Futureofwork #Machinelearning #Computervision

#artificialintelligence

A brain-computer interface (BCI) captures a user's brain activity and translates it into commands for an external application. What types of brain signals does a BCI acquire? The system can use brain signals measured on the scalp, on the cortical surface, or within the cortex to control an external application. The most researched signals are: electrical and magnetic signals of brain activity captured by intracortical electrode arrays, electrocorticography (ECoG), electroencephalography (EEG) and magnetoencephalography (MEG); and metabolic signals measuring blood flow in the brain, acquired by functional magnetic resonance imaging (fMRI) or functional near-infrared spectroscopy (fNIRS).
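
As a concrete example of the EEG route, the sketch below band-pass filters a raw EEG epoch, extracts simple band-power features, and maps them to a command with a linear classifier. The sampling rate, channel count, frequency band and command labels are assumptions for illustration, not part of the original post.

```python
# Minimal sketch of an EEG-based BCI pipeline: filter a raw epoch, extract
# band-power features, and classify them into a command for an application.
# Sampling rate, channel count and labels are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250          # assumed sampling rate in Hz
N_CHANNELS = 8    # assumed number of scalp electrodes

def bandpass(epoch, low=8.0, high=30.0, fs=FS, order=4):
    """Band-pass filter one epoch (channels x samples) to the mu/beta band."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, epoch, axis=-1)

def band_power(epoch):
    """Log-variance per channel: a common, simple band-power feature."""
    return np.log(np.var(bandpass(epoch), axis=-1))

# Train on labelled epochs (e.g. imagined left- vs right-hand movement)...
train_epochs = np.random.randn(40, N_CHANNELS, 2 * FS)   # placeholder data
train_labels = np.repeat(["left", "right"], 20)          # placeholder labels
clf = LinearDiscriminantAnalysis().fit(
    np.array([band_power(e) for e in train_epochs]), train_labels)

# ...then translate a new epoch into a command for an external application.
new_epoch = np.random.randn(N_CHANNELS, 2 * FS)
command = clf.predict(band_power(new_epoch).reshape(1, -1))[0]
print("decoded command:", command)
```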


Industry news in brief

#artificialintelligence

The latest Digital Health News industry round-up includes news on an automated recruitment platform for clinical studies, an acquisition in the medical imaging field, and an Australian coding-analytics company launching in the UK. Former NHS leader Tim Kelsey has launched a UK arm of Beamtree, an Australian company that focuses on measuring coding and the quality of hospital care. Kelsey leads the Australian company, but the new London-based arm will be led by coding policy expert Jennifer Nobbs and former Paterson Inquiry advisor Alex Kafetz. Beamtree works with health organisations around the world in a bid to improve the capture, management and leverage of human expertise. The UK office will focus on AI in health, clinical decision support, data quality and analytics supporting better health outcomes.


Rethink: Healthcare on the 'brink of a major redesign'

#artificialintelligence

Healthcare is on the brink of a major redesign that will give you access to more personalised, precise and effective care. For years now, growing ageing populations and the increase in chronic illness have created a pressing need to rethink the delivery of healthcare. While new digital technologies have offered answers to reduce the growing pressure and help transform health systems, widespread adoption of these technologies has often been slow. The pandemic, however, has shown that adoption can move much faster. As lockdown restrictions were introduced, chances are you started consulting your general practitioner or medical specialist via messaging app or video call – services falling under what is referred to as virtual care.


Google Uses Apollo Hospitals' X-Ray Data To Identify Chest Abnormalities

#artificialintelligence

Lately, artificial intelligence and machine learning are being intensively adopted by the healthcare industry. In fact, since the beginning of the pandemic, the medical imaging space has been leveraging these technologies for the rapid detection of COVID-19 among patients. Even before the pandemic set in, algorithms had been developed to detect chest-related conditions, including tuberculosis and lung cancer. However, the capabilities of these technologies and algorithms were limited in general clinical settings, where multiple abnormalities may be present at once. For instance, a system meant to detect pneumothorax may not be expected to highlight nodules suggestive of cancer.


@Radiology_AI

#artificialintelligence

See also the article by Sveinsson et al in this issue. Paul H. Yi, MD, was a musculoskeletal radiology fellow at Johns Hopkins Hospital and is affiliate faculty at the Malone Center for Engineering in Healthcare. His research focuses on the applications and limitations of deep learning in radiology, including the potential for algorithmic bias. He serves on the RSNA Machine Learning Steering Subcommittee and the trainee editorial board of Radiology: Artificial Intelligence, and is the journal's podcast co-host. In July 2021, Dr Yi joined the radiology faculty at the University of Maryland, where he serves as director of the University of Maryland Intelligent Medical Imaging Center.