A Lesion-aware Edge-based Graph Neural Network for Predicting Language Ability in Patients with Post-stroke Aphasia

Chen, Zijian, Varkanitsa, Maria, Ishwar, Prakash, Konrad, Janusz, Betke, Margrit, Kiran, Swathi, Venkataraman, Archana

arXiv.org Artificial Intelligence

We propose a lesion-aware graph neural network (LEGNet) to predict language ability from resting-state fMRI (rs-fMRI) connectivity in patients with post-stroke aphasia. Our model integrates three components: an edge-based learning module that encodes functional connectivity between brain regions, a lesion encoding module, and a subgraph learning module that leverages functional similarities for prediction. We use synthetic data derived from the Human Connectome Project (HCP) for hyperparameter tuning and model pretraining. We then evaluate the performance using repeated 10-fold cross-validation on an in-house neuroimaging dataset of post-stroke aphasia. Our results demonstrate that LEGNet outperforms baseline deep learning methods in predicting language ability. LEGNet also exhibits superior generalization ability when tested on a second in-house dataset that was acquired under a slightly different neuroimaging protocol. Taken together, the results of this study highlight the potential of LEGNet in effectively learning the relationships between rs-fMRI connectivity and language ability in a patient cohort with brain lesions for improved post-stroke aphasia evaluation.
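The lesion-aware, edge-based design described above can be illustrated with a toy sketch. This is plain NumPy, not the authors' LEGNet implementation; the function, the gating scheme, and all weight names are hypothetical. The idea shown is that connections incident to lesioned regions are gated out before connectivity features are pooled into a prediction:

```python
import numpy as np

def edge_based_readout(conn, lesion_mask, w_edge, w_out):
    """Toy sketch of a lesion-aware, edge-based prediction layer.

    conn        : (N, N) symmetric functional connectivity matrix
    lesion_mask : (N,) 1.0 for healthy regions, 0.0 for lesioned ones
    w_edge      : (N,) per-region edge weights (hypothetical, stands in
                  for a learned edge-based module)
    w_out       : scalar output weight
    """
    # Gate out every connection that touches lesioned tissue
    gate = np.outer(lesion_mask, lesion_mask)
    gated_conn = conn * gate
    # Per-region embedding: weighted sum of that region's surviving edges
    region_embed = gated_conn @ w_edge
    # Pool region embeddings into a single language-ability score
    return w_out * np.tanh(region_embed).mean()

rng = np.random.default_rng(0)
conn = rng.standard_normal((10, 10))
conn = (conn + conn.T) / 2            # symmetric, like rs-fMRI connectivity
lesion = np.ones(10)
lesion[3] = 0.0                       # region 3 is lesioned
w = rng.standard_normal(10)

score = edge_based_readout(conn, lesion, w, 1.0)
# Corrupting the lesioned region's connections leaves the score unchanged,
# because those edges are fully gated out
conn2 = conn.copy()
conn2[3, :] = 99.0
conn2[:, 3] = 99.0
score2 = edge_based_readout(conn2, lesion, w, 1.0)
print(abs(score - score2) < 1e-12)  # True
```

The invariance check at the end is the point of the sketch: a lesion-aware model should not let signal from destroyed tissue drive the prediction.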


Role of Dependency Distance in Text Simplification: A Human vs ChatGPT Simplification Comparison

Lee, Sumi, Leroy, Gondy, Kauchak, David, Just, Melissa

arXiv.org Artificial Intelligence

This study investigates human and ChatGPT text simplification and their relationship to dependency distance. A set of 220 sentences, with increasing grammatical difficulty as measured in a prior user study, was simplified by a human expert and by ChatGPT. We found that the three sentence sets all differed in mean dependency distance: the original sentences had the highest, followed by the ChatGPT-simplified sentences, while the human-simplified sentences had the lowest. Introduction Enhancing the understandability of biomedical information is vital in fostering health-literate patients. However, empirical evidence shows that readability formulas are not appropriate tools [1], [2].
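Mean dependency distance, the measure compared above, is straightforward to compute once a dependency parse is available: it is the average absolute distance between each word's position and its head's position. A minimal sketch (the head indices below are hand-assigned for illustration, not produced by a parser):

```python
def mean_dependency_distance(heads):
    """Mean dependency distance (MDD).

    heads: list where heads[i] is the 1-based position of the head of
    token i+1, or 0 if token i+1 is the root (roots are excluded).
    """
    dists = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
    return sum(dists) / len(dists)

# "The cat sat": "The" -> "cat" (head 2), "cat" -> "sat" (head 3), "sat" = root
print(mean_dependency_distance([2, 3, 0]))      # 1.0
# All three dependents attach to token 4: distances 3, 2, 1
print(mean_dependency_distance([4, 4, 4, 0]))   # 2.0
```

Longer distances between words and their heads are associated with higher working-memory load, which is why simplification that shortens them is of interest.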


Using YOLO v7 to Detect Kidney in Magnetic Resonance Imaging

Anari, Pouria Yazdian, Obiezu, Fiona, Lay, Nathan, Firouzabadi, Fatemeh Dehghani, Chaurasia, Aditi, Golagha, Mahshid, Singh, Shiva, Homayounieh, Fatemeh, Zahergivar, Aryan, Harmon, Stephanie, Turkbey, Evrim, Gautam, Rabindra, Ma, Kevin, Merino, Maria, Jones, Elizabeth C., Ball, Mark W., Linehan, W. Marston, Turkbey, Baris, Malayeri, Ashkan A.

arXiv.org Artificial Intelligence

Introduction This study explores the use of the latest You Only Look Once (YOLO V7) object detection method to enhance kidney detection in medical imaging by training and testing a modified YOLO V7 on medical image formats. Methods The study includes 878 patients with various subtypes of renal cell carcinoma (RCC) and 206 patients with normal kidneys. A total of 5,657 MRI scans for 1,084 patients were retrieved. 326 patients with 1,034 tumors were recruited from a retrospectively maintained database, and bounding boxes were drawn around their tumors. A primary model was trained on 80% of the annotated cases, with 20% held out for testing (primary test set). The best primary model was then used to identify tumors in the remaining 861 patients, and bounding box coordinates were generated on their scans using the model. Ten benchmark training sets were created from the generated coordinates on the non-segmented patients. The final model was used to predict the kidney in the primary test set. We report the positive predictive value (PPV), sensitivity, and mean average precision (mAP). Results The primary training set showed an average PPV of 0.94 +/- 0.01, sensitivity of 0.87 +/- 0.04, and mAP of 0.91 +/- 0.02. The best primary model yielded a PPV of 0.97, sensitivity of 0.92, and mAP of 0.95. The final model demonstrated an average PPV of 0.95 +/- 0.03, sensitivity of 0.98 +/- 0.004, and mAP of 0.95 +/- 0.01. Conclusion Using a semi-supervised approach with a medical image library, we developed a high-performing model for kidney detection. Further external validation is required to assess the model's generalizability.
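The reported metrics follow standard object-detection definitions: a predicted box counts as a true positive when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold, and PPV and sensitivity are then computed from the true-positive, false-positive, and false-negative counts. A minimal sketch (the counts below are illustrative, not the study's actual confusion-matrix values):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ppv_sensitivity(tp, fp, fn):
    """PPV (precision) and sensitivity (recall) from detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

# Two 2x2 boxes overlapping in a 1x2 strip: IoU = 2 / (4 + 4 - 2)
print(round(iou((0, 0, 2, 2), (1, 0, 3, 2)), 3))  # 0.333

# Illustrative counts chosen to match a PPV of 0.94
ppv, sens = ppv_sensitivity(94, 6, 13)
print(round(ppv, 2), round(sens, 2))  # 0.94 0.88
```

mAP additionally averages precision over recall levels (and, in many benchmarks, over IoU thresholds), which is why it is reported alongside the per-threshold PPV and sensitivity.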


AI can spot early signs of Alzheimer's in speech patterns, study shows: Newsroom - UT Southwestern, Dallas, Texas

#artificialintelligence

DALLAS – April 12, 2023 – New technologies that can capture subtle changes in a patient's voice may help physicians diagnose cognitive impairment and Alzheimer's disease before symptoms begin to show, according to a UT Southwestern Medical Center researcher who led a study published in the Alzheimer's Association publication Diagnosis, Assessment & Disease Monitoring. "Our focus was on identifying subtle language and audio changes that are present in the very early stages of Alzheimer's disease but not easily recognizable by family members or an individual's primary care physician," said Ihab Hajjar, M.D., Professor of Neurology at UT Southwestern's Peter O'Donnell Jr. Brain Institute. Researchers used advanced machine learning and natural language processing (NLP) tools to assess speech patterns in 206 people – 114 who met the criteria for mild cognitive decline and 92 who were unimpaired. The team then mapped those findings to commonly used biomarkers to determine their efficacy in measuring impairment. Study participants, who were enrolled in a research program at Emory University in Atlanta, were given several standard cognitive assessments before being asked to record a spontaneous 1- to 2-minute description of artwork.


Deep learning for AI-based diagnosis of skin-related neglected tropical diseases: a pilot study

#artificialintelligence

Background Deep learning, part of the broader field of artificial intelligence (AI) and machine learning, has achieved remarkable success in vision tasks. While there is growing interest in using this technology for diagnostic support for skin-related neglected tropical diseases (skin NTDs), studies in this area have been limited, and fewer still have focused on dark skin. In this study, we aimed to develop deep learning-based AI models, using clinical images we collected for five skin NTDs (Buruli ulcer, leprosy, mycetoma, scabies, and yaws), to understand how diagnostic accuracy can or cannot be improved using different models and training patterns. Methodology This study used photographs collected prospectively in Côte d'Ivoire and Ghana through our ongoing studies on the use of digital health tools for clinical data documentation and teledermatology. Our dataset included a total of 1,709 images from 506 patients.


Study: AI Behind ChatGPT Could Help Spot Early Signs of Alzheimer's Disease

#artificialintelligence

The artificial intelligence algorithms behind the chatbot program ChatGPT -- which has drawn attention for its ability to generate humanlike written responses to some of the most creative queries -- might one day be able to help doctors detect Alzheimer's disease in its early stages. Research from Drexel University's School of Biomedical Engineering, Science and Health Systems recently demonstrated that OpenAI's GPT-3 program can identify clues in spontaneous speech and predict the early stages of dementia with 80% accuracy. Reported in the journal PLOS Digital Health, the Drexel study is the latest in a series of efforts to show the effectiveness of natural language processing programs for early prediction of Alzheimer's, leveraging current research suggesting that language impairment can be an early indicator of neurodegenerative disorders. The current practice for diagnosing Alzheimer's disease typically involves a medical history review and a lengthy set of physical and neurological evaluations and tests. While there is still no cure for the disease, spotting it early can give patients more options for therapeutics and support.


Ultromics secures FDA clearance for heart failure AI software

#artificialintelligence

Ultromics has secured clearance from the U.S. Food and Drug Administration (FDA) for its artificial intelligence (AI)-based echocardiography software for the detection of heart failure with preserved ejection fraction (HFpEF). EchoGo Heart Failure uses AI to identify HFpEF from a single echocardiogram image, according to the firm. The clearance comes just weeks after Ultromics joined the Foundation for the National Institutes of Health Accelerating Medicines Partnership Heart Failure program, which is a collaboration between the National Institutes of Health, the National Heart, Lung, and Blood Institute, the FDA, the American Heart Association, the American Society of Echocardiography, and industry, the company said.


UTC professor uses artificial intelligence to crack the longevity code

#artificialintelligence

Hong Qin, a computer science professor at the University of Tennessee at Chattanooga, was born in a town on the eastern coast of China not far from the birthplace of Confucius. The great Chinese philosopher once said, "Real knowledge is to know the extent of one's ignorance." Confucius was probably onto something when he said real knowledge is knowing your limits. Qin (pronounced "chin") works in a field, computational biology, that's so intricate that it helps to have an appreciation for the limits of the human brain. More and more, human researchers such as Qin are humbling themselves and allowing artificial intelligence models and supercomputers to do the heavy lifting of scientific discovery.