Click click snap: One look at a patient's face, and AI can identify rare genetic diseases

#artificialintelligence

WASHINGTON D.C. [USA]: According to a recent study, a new artificial intelligence technology can accurately identify rare genetic disorders using a photograph of a patient's face. Named DeepGestalt, the AI technology outperformed clinicians in identifying a range of syndromes in three trials and could add value in personalised care, CNN reported. The study was published in the journal Nature Medicine. According to the study, eight per cent of the population has a disease with key genetic components, and many of those affected may have recognisable facial features. The study further adds that the technology could identify, for example, Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth and widely spaced teeth. Speaking about the work, Yaron Gurovich, the chief technology officer at FDNA and lead researcher of the study, said, "It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great."
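DeepGestalt itself is proprietary, but the problem Gurovich describes, deep learning on a small, unbalanced, many-class image dataset, is commonly approached by fine-tuning a pretrained network with a class-weighted loss. The sketch below illustrates only that general recipe; the backbone, the number of conditions, the class counts and every other detail are assumptions for illustration, not details of FDNA's system.

```python
# Illustrative sketch only: fine-tuning a pretrained CNN with class weighting
# to cope with few, unevenly distributed examples per class. This is NOT
# DeepGestalt; all names and numbers below are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_SYNDROMES = 216  # hypothetical number of supported conditions

# Start from an ImageNet-pretrained backbone and replace the classifier head,
# so the scarce medical images only have to train the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False          # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_SYNDROMES)

# Re-weight the loss inversely to class frequency so rare syndromes are not
# drowned out by the more common ones (the counts here are made up).
counts = torch.randint(5, 200, (NUM_SYNDROMES,)).float()
class_weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of face crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_SYNDROMES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In practice, pipelines of this kind also depend heavily on face detection, alignment and augmentation before the classifier ever sees an image.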


Texas hospital struggles to make IBM's Watson cure cancer

PCWorld

If IBM is looking for a new application for its Watson machine learning tools, it might consider putting health care providers' procurement and systems integration woes ahead of curing cancer. The fallout from the University of Texas MD Anderson Cancer Center's Watson-based cancer project has now prompted the resignation of the cancer center's president, Ronald DePinho, the Wall Street Journal reported Thursday. The university recently published an internal audit report into the procurement processes that led it to hand almost $40 million to IBM and over $21 million to PwC for work on the project, almost all of it without board approval. It noted that the scope of its review was limited to contracting and procurement practices and compliance issues, and did not cover project management and system development activities. The audit "should not be interpreted as an opinion on the scientific basis or functional capabilities of the system in its current state," because a separate review of those aspects of the project is being conducted by an external consultant, it said.


Solving Large Scale Phylogenetic Problems using DCM2

AAAI Conferences

Tandy J. Warnow, Department of Computer Science, University of Arizona, Tucson, AZ, USA (email: tandy@cs.arizona.edu). Abstract: In an earlier paper, we described a new method for phylogenetic tree reconstruction called the Disk Covering Method, or DCM. This is a general method which can be used with any existing phylogenetic method in order to improve its performance. We showed analytically and experimentally that when DCM is used in conjunction with polynomial time distance-based methods, it improves the accuracy of the trees reconstructed. In this paper, we discuss a variant on DCM that we call DCM2. DCM2 is designed to be used with phylogenetic methods whose objective is the solution of NP-hard optimization problems. We also motivate the need for solutions to NP-hard optimization problems by showing that on some very large and important datasets, the most popular (and presumably best performing) polynomial time distance methods have poor accuracy. Introduction: The accurate recovery of the phylogenetic branching order from molecular sequence data is fundamental to many problems in biology. Multiple sequence alignment, gene function prediction, protein structure, and drug design all depend on phylogenetic inference. Although many methods exist for the inference of phylogenetic trees, biologists who specialize in systematics typically compute Maximum Parsimony (MP) or Maximum Likelihood (ML) trees because they are thought to be the best predictors of accurate branching order. Unfortunately, MP and ML optimization problems are NP-hard, and typical heuristics use hill-climbing techniques to search through an exponentially large space. When large numbers of taxa are involved, the computational cost of MP and ML methods is so great that it may take years of computation for a local minimum to be obtained on a single dataset (Chase et al. 1993; Rice, Donoghue, & Olmstead 1997). It is because of this computational cost that many biologists resort to distance-based calculations, such as Neighbor-Joining (NJ) (Saitou & Nei 1987), even though these may have poor accuracy when the diameter of the tree is large (Huson et al. 1998). As DNA sequencing methods advance, large, divergent biological datasets are becoming commonplace. For example, the February 1999 issue of Molecular Biology and Evolution contained five distinct datasets of more than 50 taxa, and two others that had been pruned below that.
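For readers unfamiliar with the distance-based methods the abstract contrasts with NP-hard MP/ML search, the following is a minimal sketch of Neighbor-Joining (Saitou & Nei 1987). It is a plain reference implementation for illustration, not DCM2, and the four-taxon distance matrix in the usage example is made up.

```python
# Minimal sketch of Neighbor-Joining, the kind of polynomial-time distance
# method the paper contrasts with NP-hard MP/ML optimization. Illustrative
# reference implementation only; this is not the DCM2 method itself.
def neighbor_joining(labels, D):
    """labels: list of taxon names; D: dict-of-dicts symmetric distance matrix.
    Returns the tree as a list of (node_a, node_b, branch_length) edges."""
    nodes = list(labels)
    edges = []
    new_id = 0
    while len(nodes) > 2:
        n = len(nodes)
        # Total distance from each node to all others.
        r = {i: sum(D[i][k] for k in nodes if k != i) for i in nodes}
        # Q-criterion: pick the pair whose joining minimises total tree length.
        i, j = min(
            ((a, b) for a in nodes for b in nodes if a < b),
            key=lambda ab: (n - 2) * D[ab[0]][ab[1]] - r[ab[0]] - r[ab[1]],
        )
        u = f"internal_{new_id}"
        new_id += 1
        # Branch lengths from i and j to the new internal node u.
        li = 0.5 * D[i][j] + (r[i] - r[j]) / (2 * (n - 2))
        lj = D[i][j] - li
        edges.append((i, u, li))
        edges.append((j, u, lj))
        # Distances from u to every remaining node.
        D[u] = {}
        for k in nodes:
            if k in (i, j):
                continue
            duk = 0.5 * (D[i][k] + D[j][k] - D[i][j])
            D[u][k] = duk
            D[k][u] = duk
        nodes = [k for k in nodes if k not in (i, j)] + [u]
    # Connect the last two remaining nodes directly.
    a, b = nodes
    edges.append((a, b, D[a][b]))
    return edges

# Toy usage on a made-up additive distance matrix over four taxa.
taxa = ["A", "B", "C", "D"]
D = {
    "A": {"B": 5.0, "C": 9.0, "D": 9.0},
    "B": {"A": 5.0, "C": 10.0, "D": 10.0},
    "C": {"A": 9.0, "B": 10.0, "D": 8.0},
    "D": {"A": 9.0, "B": 10.0, "C": 8.0},
}
print(neighbor_joining(taxa, D))
```

The method greedily agglomerates one pair of nodes per iteration, so it runs in polynomial time, which is exactly why it scales to datasets where MP and ML search is impractical, at the cost of the accuracy issues the abstract describes.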


Google AI predicts hospital inpatient death risks with 95% accuracy

#artificialintelligence

Using raw data from the entirety of a patient's electronic health record, Google researchers have developed an artificial intelligence network capable of predicting the course of the patient's disease and their risk of death during a hospital stay, with much more accuracy than previous methods. The deep learning models were trained on over 216,000 deidentified EHRs from more than 114,000 adult patients who had been hospitalized for at least one day at either the University of California, San Francisco or the University of Chicago. For those two academic medical centers, the AI predicted the risks of mortality, readmission and prolonged stays, as well as discharge diagnoses by ICD-9 code. The network was 95% accurate in predicting a patient's risk of dying while in the hospital, with a much lower rate of false alerts, compared with the traditional regression model, the augmented Early Warning Score, which measures 28 factors and was about 85% accurate at the two centers. The researchers' findings were published last month in the Nature Partner Journal npj Digital Medicine.
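The accuracy figures quoted here summarize how well each model discriminates patients who die in hospital from those who do not. As a rough illustration of how such a comparison is typically made, the sketch below fits two simple classifiers on synthetic data, one restricted to 28 "early warning" style features and one given a much wider feature set, and compares their areas under the ROC curve. It is not Google's network, the real augmented Early Warning Score, or real patient data.

```python
# Illustrative comparison of two in-hospital mortality scores on synthetic
# data; every feature, count and model choice here is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular stand-in for EHR-derived data: 28 early-warning style
# measurements plus many additional features from the rest of the record.
n_patients, n_ews, n_extra = 5000, 28, 200
X_ews = rng.normal(size=(n_patients, n_ews))
X_extra = rng.normal(size=(n_patients, n_extra))
signal = X_ews[:, :5].sum(axis=1) + 0.5 * X_extra[:, :20].sum(axis=1)
y = (signal + rng.normal(scale=2.0, size=n_patients) > 2).astype(int)

X_full = np.hstack([X_ews, X_extra])
X_tr, X_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_ews, X_full, y, test_size=0.3, random_state=0
)

# Baseline: a score built only from the 28 early-warning features.
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Richer model: the same learner given the full feature set, standing in for
# a deep network trained on the whole record.
rich = LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr)

print("baseline AUROC:   ", roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1]))
print("full-record AUROC:", roc_auc_score(y_te, rich.predict_proba(Xf_te)[:, 1]))
```

The richer model's advantage in this toy setup comes purely from seeing more informative features, which mirrors the article's point that using the entire record, rather than a fixed set of 28 factors, is what drives the improvement.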


Artificial intelligence virtual consultant helps deliver better patient care

#artificialintelligence

WASHINGTON, DC (March 8, 2017)--Interventional radiologists at the University of California at Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients' interventional radiology care, according to research presented today at the Society of Interventional Radiology's 2017 Annual Scientific Meeting. The researchers used cutting-edge artificial intelligence to create a "chatbot" interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment. "We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care," said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study. "Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care."
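The article does not describe the UCLA system's internals beyond noting that it is deep-learning based, but the core behaviour, matching a referring clinician's free-text question to a curated, evidence-based answer, can be sketched with a simple retrieval approach. Everything below, including the toy FAQ entries, is a hypothetical stand-in for illustration, not the UCLA chatbot.

```python
# Minimal sketch of an FAQ-style "virtual consultant": match a clinician's
# free-text question to the closest canned, evidence-based answer using
# TF-IDF similarity. Illustrative stand-in only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base of frequently asked interventional radiology questions.
faq = {
    "What is an IVC filter and when is it indicated?":
        "An IVC filter is a device placed in the inferior vena cava ...",
    "How should a patient prepare for a liver biopsy?":
        "Patients are typically asked to fast and to pause anticoagulants ...",
    "What follow-up is needed after uterine fibroid embolization?":
        "A clinic visit and imaging follow-up are usually scheduled ...",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer(stop_words="english")
question_matrix = vectorizer.fit_transform(questions)

def answer(query: str) -> str:
    """Return the stored answer whose question is most similar to the query."""
    similarity = cosine_similarity(vectorizer.transform([query]), question_matrix)
    return faq[questions[similarity.argmax()]]

print(answer("When does a patient need an IVC filter?"))
```

A production system would replace the keyword matching with learned language models and a much larger, clinically vetted answer bank, but the overall shape, automated routing of routine questions to pre-approved evidence, is the same.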