Study Suggests AI Enhances Non-Contrast CT Detection of Large Vessel Occlusion

#artificialintelligence

An emerging artificial intelligence (AI) algorithm may facilitate earlier detection of large vessel occlusions on non-contrast computed tomography (CT) scans, and with it earlier identification of stroke patients who are good candidates for minimally invasive thrombectomy. In a study abstract presented at the Society of Neurointerventional Surgery's (SNIS) 19th Annual Meeting in Toronto, researchers noted that the algorithm, trained on 200 CT scans, detects ipsiversive gaze deviation, a clinical sign of large vessel occlusion, on non-contrast CT. In a subsequent study of 116 patients who received endovascular therapy for large vessel occlusions, the authors found ipsiversive gaze deviation in 71.1 percent of patients with proximal occlusions (59 of 83 patients), and the algorithm identified that gaze deviation with 79 percent accuracy (47 of 59 patients). The study authors said the algorithm could lead to faster treatment decisions for patients with acute ischemic stroke. "Simply put, the faster we act, the better our stroke patients' outcomes will be. Our results represent an advance that has the potential to speed up the identification of (large vessel occlusion) stroke during the triage process at the hospital," emphasized lead study author Jason Tarpley, M.D., Ph.D., stroke medical director at the Pacific Stroke and Aneurysm Center in Santa Monica, California.
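
For readers checking the arithmetic, here is a minimal sketch that reproduces the two reported rates from the raw counts in the abstract (the counts come from the summary above; the variable names are ours, not the study's):

```python
# Counts reported in the abstract; names are illustrative, not the study's.
patients_with_proximal_occlusion = 83
patients_with_gaze_deviation = 59   # found on non-contrast CT
ai_correct_identifications = 47     # gaze deviations the algorithm caught

prevalence = patients_with_gaze_deviation / patients_with_proximal_occlusion
accuracy = ai_correct_identifications / patients_with_gaze_deviation

print(f"Gaze deviation prevalence: {prevalence:.1%}")  # 71.1%
print(f"Algorithm accuracy:        {accuracy:.1%}")    # 79.7%, reported as 79%
```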


Improving Accuracy and Efficiency with Concurrent Use of Artificial Intelligence for Digital Breast Tomosynthesis

#artificialintelligence

To evaluate whether artificial intelligence (AI) can shorten digital breast tomosynthesis (DBT) reading time while maintaining or improving accuracy, a deep learning AI system was developed to identify suspicious soft-tissue and calcified lesions in DBT images. A reader study compared the performance of 24 radiologists (13 of them breast subspecialists) reading 260 DBT examinations (including 65 cancer cases) both with and without AI. Readings occurred in two sessions separated by at least 4 weeks. Area under the receiver operating characteristic curve (AUC), reading time, sensitivity, specificity, and recall rate were evaluated with statistical methods for multireader, multicase studies. Radiologist performance for the detection of malignant lesions, measured by mean AUC, increased 0.057 with the use of AI (95% confidence interval [CI]: 0.028, 0.087; P < .01). Reading time decreased 52.7% (95% CI: 41.8%, 61.5%; P < .01). Sensitivity increased from 77.0% without AI to 85.0% with AI (difference 8.0%; 95% CI: 2.6%, 13.4%; P < .01). The concurrent use of an accurate DBT AI system thus improved cancer detection efficacy in a reader study that demonstrated increases in AUC, sensitivity, and specificity and a reduction in recall rate and reading time. See also the commentary by Hsu and Hoyt in this issue. In short, reading times were significantly reduced, and sensitivity, specificity, and recall rate improved, in a nonclinical reader study when an AI system was used concurrently with image interpretation for DBT.
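
The headline metric here is AUC from a multireader, multicase (MRMC) analysis. The full MRMC statistics are beyond a short example, but the sketch below shows how AUC, sensitivity, and specificity would be computed for a single reader's suspicion scores, using scikit-learn and synthetic data shaped like the study design (260 examinations, 65 cancers). None of the numbers it prints are the study's.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for one reader: 260 cases, 65 of them cancers, scored
# on a 0-100 suspicion scale. Real MRMC analysis pools 24 readers' scores.
y_true = np.array([1] * 65 + [0] * 195)
scores = np.where(y_true == 1,
                  rng.normal(70, 15, size=260),   # cancers tend to score higher
                  rng.normal(40, 15, size=260))

print(f"AUC: {roc_auc_score(y_true, scores):.3f}")

# Dichotomize at a recall threshold to get sensitivity and specificity.
recalled = (scores >= 55).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, recalled).ravel()
print(f"sensitivity: {tp / (tp + fn):.1%}, specificity: {tn / (tn + fp):.1%}")
```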


Research: Artificial intelligence can fuel racial bias in health care, but it can mitigate it, too

#artificialintelligence

Artificial intelligence is here to stay in the healthcare industry. The term refers to a constellation of computational tools that can comb through vast troves of data at rates far surpassing human ability, in ways that can streamline providers' jobs. Regardless of the specific type of AI, these tools are generally capable of making a massive, complex industry run more efficiently. But several studies show that AI can also propagate racial biases, leading to misdiagnosis of medical conditions among people of colour, insufficient treatment of pain, under-prescription of life-affirming medications, and more. Many patients don't even know they've been enrolled in healthcare algorithms that are influencing their care and outcomes.


Using AI to Diagnose Birth Defect in Fetal Ultrasound Images - Neuroscience News

#artificialintelligence

Summary: Using datasets of fetal ultrasounds, a new AI algorithm is able to detect cystic hygroma, a rare embryonic developmental disorder, within the first trimester of pregnancy. In a new proof-of-concept study led by Dr. Mark Walker at the uOttawa Faculty of Medicine, researchers are pioneering the use of a unique AI-based deep learning model as an assistive tool for the rapid and accurate reading of ultrasound images. It is trailblazing work: although deep learning models have become increasingly popular for interpreting medical images and detecting disorders, their application to obstetric ultrasonography is still in its nascent stages, and few AI-enabled studies have been published in the field. The goal of the team's study was to demonstrate the potential for deep learning architecture to support early and reliable identification of cystic hygroma from first-trimester ultrasound scans.
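
The article does not describe the team's architecture, but a binary image classifier of the general kind used for such work looks like the PyTorch sketch below. The layer sizes and the single-channel 128x128 input are our assumptions for illustration, not the study's model.

```python
import torch
import torch.nn as nn

class UltrasoundClassifier(nn.Module):
    """Toy binary classifier (normal vs. cystic hygroma) for 1-channel
    128x128 ultrasound frames. Illustrative only; not the study's model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 64x64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                      # -> 1x1
        )
        self.classifier = nn.Linear(64, 2)  # logits for the two classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = UltrasoundClassifier()
dummy_batch = torch.randn(4, 1, 128, 128)  # stand-in for preprocessed scans
print(model(dummy_batch).shape)            # torch.Size([4, 2])
```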


AI system that mimics human gaze could be used to detect cancer

#artificialintelligence

A cutting-edge artificial intelligence (AI) system that can accurately predict the areas of an image a person is most likely to look at has been created by scientists at Cardiff University. Based on the mechanics of the human brain and its ability to distinguish between different parts of an image, the researchers say the novel system represents human vision more accurately than anything that has gone before. Applications of the new system range from robotics, multimedia communication, and video surveillance to automated image editing and finding tumors in medical images. The Multimedia Computing Research Group at Cardiff University now plans to test the system by using it to help radiologists find lesions in medical images, with the overall goal of improving the speed, accuracy, and sensitivity of medical diagnostics. The system has been presented in the journal Neurocomputing.
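
The Cardiff model itself isn't published as code here, but the task it addresses, saliency prediction, has well-known classical baselines. A minimal sketch using the spectral-residual method (Hou & Zhang, 2007), available in opencv-contrib-python, gives a feel for what a saliency map is; this is not the Cardiff system, and "scene.jpg" is a hypothetical input path.

```python
import cv2

# Classical spectral-residual saliency as a baseline for "where will people
# look?". Requires the opencv-contrib-python package.
image = cv2.imread("scene.jpg")  # hypothetical input; returns None if missing
assert image is not None, "provide an input image"

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(image)  # float map in [0, 1]
assert ok

# Scale to an 8-bit heatmap for inspection: bright pixels attract gaze.
cv2.imwrite("saliency.png", (saliency_map * 255).astype("uint8"))
```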


Scientists say AI can tell your politics from a brain scan -- we say it's BS

#artificialintelligence

A team of researchers using what they call "state-of-the-art artificial intelligence techniques" has reportedly created a system capable of identifying a person's political ideology from their brain scans. This is either the most advanced AI system in the entire known universe or it's a total sham. You don't even have to read the researchers' paper to debunk their work. All you need is the phrase "politics change," and we're done here. But, just for fun, let's actually get into the paper and explain how prediction models work. A team of researchers from Ohio State University, the University of Pittsburgh, and New York University gathered 174 US college students (median age 21), the vast majority of whom self-identified as liberal, and conducted brain scans on them while they completed a short battery of tests.
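
Since the piece promises to explain how prediction models work, here is the sanity check any such paper needs: compare the classifier against a majority-class dummy baseline under cross-validation. The sketch uses scikit-learn with synthetic data standing in for the brain scans; the 140/34 class split is our hypothetical illustration of the liberal-skewed sample described above.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 174 subjects, mostly one class, with features
# (the "brain scans") that carry no real signal at all.
X = rng.normal(size=(174, 50))
y = np.array([1] * 140 + [0] * 34)  # hypothetical skewed labels

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5)
model = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

# With a skewed sample, high raw accuracy is what guessing the majority
# class already gets you; a real result must clearly beat this number.
print(f"majority-class baseline accuracy: {baseline.mean():.2f}")
print(f"logistic regression accuracy:     {model.mean():.2f}")
```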


Artificial intelligence may help identify osteoporosis on dental panoramic radiographs

#artificialintelligence

Osteoporosis is becoming a global health issue as life expectancy increases, but it is difficult to detect in its early stages owing to a lack of discernible symptoms. Screening for osteoporosis using widely available dental panoramic radiographs would therefore be cost-effective and useful. In this study, the researchers investigated the use of deep learning to classify osteoporosis from dental panoramic radiographs, and assessed the effect of adding clinical covariate data to the radiographic images on identification performance.
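
The abstract doesn't say how the covariates were added, but one common approach is late fusion: concatenate image-derived features with the clinical covariates before a final classifier. A minimal sketch under that assumption, with synthetic stand-ins for both feature sets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # hypothetical cohort size

# Stand-ins: features a CNN might extract from each panoramic radiograph,
# plus clinical covariates such as age and BMI (all values synthetic).
image_features = rng.normal(size=(n, 64))
covariates = np.column_stack([
    rng.uniform(40, 90, n),   # age (years)
    rng.uniform(17, 35, n),   # BMI
])
y = rng.integers(0, 2, n)     # osteoporosis label (synthetic)

image_only = cross_val_score(LogisticRegression(max_iter=1000),
                             image_features, y, cv=5)
fused = cross_val_score(LogisticRegression(max_iter=1000),
                        np.hstack([image_features, covariates]),  # late fusion
                        y, cv=5)
print(f"image only: {image_only.mean():.2f}, image + covariates: {fused.mean():.2f}")
```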


Deep Learning: Types and Applications in Healthcare

#artificialintelligence

Deep learning (DL), also known as deep structured learning or hierarchical learning, is a subset of machine learning. It is loosely based on the way neurons connect to each other to process information in animal brains. To mimic these connections, DL uses a layered algorithmic architecture known as artificial neural networks (ANNs) to analyze the data. By analyzing how data is filtered through the layers of the ANN and how the layers interact with each other, a DL algorithm can 'learn' to make correlations and connections in the data. These capabilities make DL algorithms an innovative tool with the potential to transform healthcare.
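
To make the "layered architecture" concrete, here is a minimal two-layer network forward pass in NumPy, with toy weights and no training loop. Each layer is just a weight matrix, a bias, and a nonlinearity; deep learning stacks many such layers and tunes the weights from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A tiny ANN: 4 inputs -> hidden layer of 8 neurons -> 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    hidden = relu(x @ W1 + b1)                      # layer 1 filters the input
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))    # layer 2 -> probability

x = rng.normal(size=(1, 4))  # one example with 4 features
print(forward(x))            # a value between 0 and 1
```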


Studying the brain to build AI that processes language as people do

#artificialintelligence

AI has made impressive strides in recent years, but it is still far from learning language as efficiently as humans. For instance, children learn from just a few examples that "orange" can refer to both a fruit and a color, but modern AI systems can't do this nearly as efficiently as people. This has led many researchers to wonder: can studying the human brain help us build AI systems that learn and reason the way people do? Today, Meta AI is announcing a long-term research initiative to better understand how the human brain processes language. In collaboration with the neuroimaging center Neurospin (CEA) and INRIA, we're comparing how AI language models and the brain respond to the same spoken or written sentences.
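
One standard way such model-to-brain comparisons are made is an "encoding model": regress brain responses onto a language model's activations for the same sentences and score the correlation on held-out data. The sketch below shows that general recipe with synthetic arrays standing in for both signals; it is not Meta's pipeline, and the array shapes are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins, one row per sentence: in a real study, `activations` would come
# from a language model and `brain` from MEG/fMRI recordings of listeners.
n_sentences, model_dim, n_voxels = 500, 128, 20
activations = rng.normal(size=(n_sentences, model_dim))
brain = (activations @ rng.normal(size=(model_dim, n_voxels)) * 0.1
         + rng.normal(size=(n_sentences, n_voxels)))  # weak signal + noise

X_tr, X_te, y_tr, y_te = train_test_split(activations, brain, random_state=0)
encoder = Ridge(alpha=10.0).fit(X_tr, y_tr)  # linear map: model -> brain
pred = encoder.predict(X_te)

# Score: per-voxel correlation between predicted and recorded responses.
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean encoding correlation: {np.mean(r):.2f}")
```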


Dogs understand praise the same way we do. Here's why that matters.

National Geographic

Every dog owner knows that saying "Good dog!" in a happy, high-pitched voice will evoke a flurry of joyful tail wagging in their pet. That made scientists curious: what exactly happens in your dog's brain when it hears praise, and is it similar to the hierarchical way our own brains process such acoustic information? When a person receives a compliment, the more primitive, subcortical auditory regions react first to the intonation, the emotional force of the spoken words. Next, the brain taps the more recently evolved auditory cortex to work out the meaning of the words, which is learned. In 2016, a team of scientists discovered that dogs' brains, like those of humans, compute the intonation and meaning of a word separately, although dogs use their right brain hemisphere to do so, whereas we use our left hemisphere.