WASHINGTON D.C. [USA]: According to a recent study, a new artificial intelligence technology can accurately identify rare genetic disorders from a photograph of a patient's face. Named DeepGestalt, the AI technology outperformed clinicians in identifying a range of syndromes in three trials and could add value in personalised care, CNN reported. The study was published in the journal Nature Medicine. According to the study, eight per cent of the population has a disease with a key genetic component, and many of these patients may have recognisable facial features. The study further adds that the technology could identify, for example, Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth and widely spaced teeth. Speaking about it, Yaron Gurovich, the chief technology officer at FDNA and lead researcher of the study, said, "It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great."
A fully automated artificial intelligence (AI)-based multispectral absorbance imaging system effectively classified the function and potency of induced pluripotent stem cell-derived retinal pigment epithelial (iPSC-RPE) cells from patients with age-related macular degeneration (AMD). Findings from the system could be applied to assessing future cellular therapies, according to research presented at the 2018 ARVO annual meeting. The software, which uses convolutional neural network (CNN) deep learning algorithms, effectively evaluated release criteria for the iPSC-RPE cell-based therapy in a standard, reproducible, and cost-effective fashion. The AI-based analysis was as specific and sensitive as traditional molecular and physiological assays, without the need for human intervention. "Cells can be classified with high accuracy using nothing but absorbance images," wrote lead investigator Nathan Hotaling and colleagues from the National Institutes of Health in their poster.
In this study, we proposed a convolutional neural network model for gender prediction using English Twitter text as input. An ensemble of the proposed models achieved an accuracy of 0.8237 on gender prediction, comparing favorably with the state-of-the-art performance in a recent author-profiling task. We further leveraged the trained models to predict gender labels for an HPV vaccine-related corpus and identified gender differences in public perceptions regarding the HPV vaccine. The findings are largely consistent with previous survey-based studies.
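The study above does not publish its architecture details here, but the general shape of a text CNN for this kind of task is well established: embed the characters (or words) of a tweet, slide one-dimensional convolution filters over the sequence, max-pool over time, and feed the pooled features to a sigmoid output. The following is a minimal numpy-only sketch of that forward pass with hypothetical dimensions and untrained random weights; the actual model, vocabulary, and training procedure are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a character vocabulary and small, untrained weights.
VOCAB = "abcdefghijklmnopqrstuvwxyz "
EMB_DIM, N_FILTERS, KERNEL = 8, 4, 3

emb = rng.normal(size=(len(VOCAB), EMB_DIM))            # character embeddings
conv_w = rng.normal(size=(N_FILTERS, KERNEL, EMB_DIM))  # 1-D convolution filters
out_w = rng.normal(size=N_FILTERS)                      # logistic output weights

def predict(text: str) -> float:
    """Forward pass: embed -> 1-D convolution -> ReLU -> max-over-time pool -> sigmoid."""
    ids = [VOCAB.index(c) for c in text.lower() if c in VOCAB]
    x = emb[ids]                                        # (seq_len, EMB_DIM)
    feats = []
    for f in conv_w:
        # Slide the filter over every window of KERNEL consecutive characters.
        acts = [np.sum(f * x[i:i + KERNEL]) for i in range(len(x) - KERNEL + 1)]
        feats.append(max(0.0, max(acts)))               # ReLU + max pooling
    z = np.dot(out_w, feats)
    return float(1.0 / (1.0 + np.exp(-z)))              # probability of one class
```

In practice several such models would be trained independently and their output probabilities averaged, which is the usual meaning of "ensemble" in this setting; with the random weights above the output is of course meaningless, but it illustrates the data flow.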
"One man's trash is another man's treasure," is a familiar expression. When it comes to health and genomics, "junk" DNA may turn out to be a goldmine. In a recent study, Princeton University-led researchers used whole-genome sequencing and artificial intelligence (AI) deep learning to identify the contribution of noncoding mutations to autism risk--demonstrating that mutations in "junk" DNA can contribute to a complex disease. The study was led by Princeton professor Olga Troyanskaya, who is also deputy director for genomics at the Flatiron Institute's Center for Computational Biology (CCB) in New York City, along with professor Robert Darnell of The Rockefeller University, also an investigator at the Howard Hughes Medical Institute. Published on May 27 in Nature Genetics, the study presented an AI deep learning framework that "predicts the specific regulatory effects and the deleterious impact of genetic variants," and used it on autism spectrum disorder (ASD).
For effective treatment of Alzheimer disease (AD), it is important to identify subjects who are most likely to exhibit rapid cognitive decline. Herein, we developed a novel framework based on a deep convolutional neural network that can predict future cognitive decline in mild cognitive impairment (MCI) patients using fluorodeoxyglucose and florbetapir positron emission tomography (PET). The network relies only on baseline PET studies of AD and normal subjects as the training dataset. Feature extraction and complicated image preprocessing, including nonlinear warping, are unnecessary for our approach. Prediction accuracy (84.2%) for conversion to AD in MCI patients outperformed conventional feature-based quantification approaches. ROC analyses revealed that the performance of the CNN-based approach was significantly higher than that of the conventional quantification methods (p < 0.05). Output scores of the network were strongly correlated with the longitudinal change in cognitive measurements. These results show the feasibility of deep learning as a tool for predicting disease outcome using brain images.
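The ROC comparison described above boils down to computing the area under the ROC curve (AUC) for each method's scores and comparing them. A compact way to compute AUC is the Mann-Whitney formulation: the probability that a randomly chosen converter receives a higher score than a randomly chosen non-converter. The sketch below uses that identity with entirely made-up scores; the labels and score values are illustrative, not data from the study.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs in which the positive case scores higher (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

# Hypothetical MCI cohort: 1 = converted to AD during follow-up, 0 = did not.
labels      = [1, 1, 1, 0, 0, 0, 0]
cnn_scores  = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2, 0.1]   # network output scores (illustrative)
conv_scores = [0.7, 0.4, 0.6, 0.8, 0.5, 0.3, 0.6]   # conventional ROI-based metric (illustrative)

cnn_auc, conv_auc = auc(labels, cnn_scores), auc(labels, conv_scores)
```

A real analysis would also attach a significance test to the difference between the two AUCs (e.g. DeLong's test, which is presumably what yields the p < 0.05 reported above), but the pairwise-ranking view is the core of the ROC comparison.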