

Diagnostic Medicine


Machine learning algorithm to diagnose deep vein thrombosis

#artificialintelligence

A team of researchers is developing an artificial intelligence (AI) algorithm that aims to diagnose deep vein thrombosis (DVT) more quickly, and as effectively, as traditional radiologist-interpreted diagnostic scans, potentially cutting long patient waiting lists and sparing patients from unnecessarily receiving drugs to treat DVT when they don't have it. DVT is a type of blood clot most commonly formed in the leg, causing swelling, pain and discomfort; if left untreated, it can lead to fatal blood clots in the lungs. Researchers at Oxford University, Imperial College and the University of Sheffield collaborated with the tech company ThinkSono (led by Fouad Al-Noor and Sven Mischkewitz) to train a machine learning algorithm (AutoDVT) to distinguish patients who had DVT from those without it. The algorithm accurately diagnosed DVT when compared with the gold-standard ultrasound scan, and the team estimated that using it could save health services $150 per examination. "Traditionally, DVT diagnoses need a specialist ultrasound scan performed by a trained radiographer, and we have found that the preliminary data using the AI algorithm coupled to a hand-held ultrasound machine shows promising results," said study lead Dr. Nicola Curry, a researcher at Oxford University's Radcliffe Department of Medicine and clinician at Oxford University Hospitals NHS Foundation Trust.
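AutoDVT's internals are not public, so the following is a hedged illustration only: a frame-level binary classifier for compression-ultrasound images built on a pretrained backbone, with per-patient aggregation of frame scores. The MobileNetV2 backbone, 224x224 input size and mean aggregation are assumptions for the sketch, not details from the study.

```python
# Hedged sketch only: frame-level DVT/no-DVT classification of ultrasound
# frames with a pretrained backbone, aggregated to a patient-level score.
# Backbone choice, input size and aggregation rule are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; fine-tune later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # per-frame probability of DVT
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

def patient_score(frames: np.ndarray) -> float:
    """Aggregate frame-level probabilities into one patient-level score."""
    probs = model.predict(frames, verbose=0).ravel()
    return float(probs.mean())
```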


@Radiology_AI

#artificialintelligence

On October 5, 2020, the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) 2020 conference hosted a virtual panel discussion with members of the Machine Learning Steering Subcommittee of the Radiological Society of North America. The MICCAI Society brings together scientists, engineers, physicians, educators, and students from around the world. Both societies share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel elaborated on how collaborations between radiologists and machine learning scientists facilitate the creation and clinical success of imaging technology for radiology. This report presents structured highlights of the moderated dialogue at the panel.


Pneumonia Detection:

#artificialintelligence

Build a deep learning model that can detect pneumonia from patients' chest X-ray images. Below is the high-level approach I took to create the model. I first collected the data from Kaggle, which consists of chest X-ray images of patients from China. Then I moved on to creating the architecture of a convolutional neural network (CNN), a type of deep learning model. The data, obtained from Kaggle, contain 5,856 chest X-ray images of pediatric patients under the age of five from a medical center in Guangzhou, China.
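A minimal sketch of such a model, not the author's exact architecture: a small Keras CNN trained on the Kaggle chest X-ray images, assuming the common chest_xray/train folder layout with NORMAL and PNEUMONIA subdirectories (the path and layer sizes are assumptions).

```python
# Minimal binary CNN for pneumonia detection from chest X-rays.
# Assumes the Kaggle "chest_xray" folder layout: train/NORMAL, train/PNEUMONIA.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

# Load labelled images from disk; the directory path is an assumed placeholder.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train",
    image_size=IMG_SIZE,
    batch_size=32,
    label_mode="binary",
)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # pneumonia vs. normal
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```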


Google's new deep learning system can give a boost to radiologists

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Deep learning can detect abnormal chest x-rays with accuracy that matches that of professional radiologists, according to a new paper by a team of AI researchers at Google published in the peer-reviewed science journal Nature. The deep learning system can help radiologists prioritize chest x-rays, and it can also serve as a first response tool in emergency settings where experienced radiologists are not available. The findings show that, while deep learning is not close to replacing radiologists, it can help boost their productivity at a time when the world is facing a severe shortage of medical experts. The paper also shows how far the AI research community has come in building processes that reduce the risks of deep learning models and produce work that can be built on in the future.
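To make the prioritization idea concrete, here is a simple sketch, not Google's system, of how a model's abnormality score could be used to reorder a reading worklist so likely-abnormal studies are seen first. The Study type, field names and scores are hypothetical.

```python
# Hypothetical worklist triage: sort studies by a model's abnormality score
# so that the most likely abnormal chest x-rays are read first.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    abnormality_score: float  # model output in [0, 1]

def prioritize(worklist: list[Study]) -> list[Study]:
    """Return the worklist ordered from highest to lowest predicted abnormality."""
    return sorted(worklist, key=lambda s: s.abnormality_score, reverse=True)

queue = [Study("CXR-001", 0.12), Study("CXR-002", 0.91), Study("CXR-003", 0.47)]
for study in prioritize(queue):
    print(study.study_id, study.abnormality_score)
```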


The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare

#artificialintelligence

The first barrier is data availability. ML and deep learning models require large datasets to accurately classify or predict different tasks [27]. The sectors where ML has seen immense progress are those with large datasets available to enable more complex, precise algorithms [28]. In healthcare, however, the availability of data is a complex issue. At the organizational level, health data are not only expensive [27], but there is an ingrained reluctance towards data sharing between hospitals, as records are considered the property of each hospital to manage for its individual patients [29].


Deep learning model classifies brain tumors with single MRI scan

#artificialintelligence

"This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume," said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology's Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri. The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each was documented through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope. "Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination," he said. To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources.


Pinaki Laskar on LinkedIn: #Futureofwork #Machinelearning #Computervision

#artificialintelligence

A brain-computer interface (BCI) captures a user's brain activity and translates it into commands for an external application. What types of brain signals can a BCI acquire? The system can use any of the brain's electrical signals, measured on the scalp, on the cortical surface, or within the cortex, to control an external application. The most researched signals are: electrical and magnetic signals of brain activity captured by intracortical electrode arrays, electrocorticography (ECoG), electroencephalography (EEG) and magnetoencephalography (MEG); and metabolic signals measuring blood flow in the brain, acquired by functional magnetic resonance imaging (fMRI) or functional near-infrared spectroscopy (fNIRS).


Industry news in brief

#artificialintelligence

The latest Digital Health News industry round-up includes news on an automated recruitment platform for clinical studies, an acquisition in the medical imaging field and an Australian company focused on measuring coding launching into the UK. Former NHS leader Tim Kelsey has launched an international division of Beamtree, an Australian company that focuses on measuring coding and the quality of hospital care, in the UK. Kelsey leads the Australian company, but the new London-based arm will be led by coding policy expert Jennifer Nobbs and former Paterson Inquiry advisor Alex Kafetz. Beamtree works with health organisations around the world in a bid to improve the capture, management and use of human expertise. The UK office will focus on AI in health, clinical decision support, data quality and analytics supporting better health outcomes.


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


Rethink: Healthcare on the 'brink of a major redesign'

#artificialintelligence

Healthcare is on the brink of a major redesign that will give you access to more personalised, precise and effective care. For years now, growing ageing populations and the increase in chronic illness have created a pressing need to rethink the delivery of healthcare. While new digital technologies have offered answers to reduce the growing pressure and help transform health systems, widespread adoption of these technologies has often been slow. The pandemic, however, has shown that adoption can move much faster. As lockdown restrictions were introduced, chances are you started consulting your general practitioner or medical specialist via a messaging app or video call – services falling under what is referred to as virtual care.