Using raw data from the entirety of a patient's electronic health record, Google researchers have developed an artificial intelligence network capable of predicting the course of a patient's disease and risk of death during a hospital stay with much greater accuracy than previous methods. The deep learning models were trained on over 216,000 deidentified EHRs from more than 114,000 adult patients who had been hospitalized for at least one day at either the University of California, San Francisco or the University of Chicago. For those two academic medical centers, the AI predicted the risks of mortality, readmission and prolonged stays, as well as discharge diagnoses by ICD-9 code. The network was 95% accurate in predicting a patient's risk of dying in the hospital, with a much lower rate of false alerts than the traditional regression model, the augmented Early Warning Score, which measures 28 factors and was about 85% accurate at the two centers. The researchers' findings were published last month in the Nature journal npj Digital Medicine.
Brain networks have received considerable attention given their critical significance for understanding human brain organization, for investigating neurological disorders and for clinical diagnostic applications. Most existing work in brain network analysis focuses on either structural or functional connectivity alone and thus cannot leverage the complementary information each provides. Although multi-view learning methods have been proposed to learn from both networks (or views), these methods aim to reach a consensus among multiple views, and thus distinct intrinsic properties of each view may be ignored. How to jointly learn representations from structural and functional brain networks while preserving their inherent properties is a critical problem. In this paper, we propose a Siamese community-preserving graph convolutional network (SCP-GCN) framework to learn a joint structural and functional embedding of brain networks. Specifically, we use graph convolutions to learn the joint embedding, where the graph structure is defined by structural connectivity and the node features come from functional connectivity. Moreover, we propose to preserve the community structure of brain networks in the graph convolutions by considering intra-community and inter-community properties during learning. Furthermore, we use a Siamese architecture, which models pairwise similarity learning, to guide the learning process. To evaluate the proposed approach, we conduct extensive experiments on two real brain network datasets. The experimental results demonstrate the superior performance of the proposed approach in structural and functional joint embedding for neurological disorder analysis, indicating its promising value for clinical applications. This work was done when the author was at the University of Illinois at Chicago.
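The core ingredients described above can be sketched in a few functions: a graph-convolution layer whose adjacency matrix comes from structural connectivity and whose node features come from functional connectivity, a community-preserving loss that tightens intra-community embeddings while pushing inter-community centroids apart, and a Siamese-style pairwise distance between two subjects' embeddings. This is a minimal illustration, not the paper's exact formulation; the specific loss form, the mean-embedding readout, and all function names here are assumptions.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: the structural-connectivity matrix `adj`
    defines the graph, and the functional-connectivity profile `feats`
    supplies the node features (symmetric normalization with self-loops)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

def community_loss(embed, communities, lam=1.0):
    """Hypothetical community-preserving loss: minimize the spread of node
    embeddings within each community (intra) while maximizing squared
    distances between community centroids (inter)."""
    labels = np.asarray(communities)
    intra, centroids = 0.0, []
    for c in np.unique(labels):
        members = embed[labels == c]
        mu = members.mean(axis=0)
        centroids.append(mu)
        intra += ((members - mu) ** 2).sum()
    centroids = np.stack(centroids)
    inter = sum(((centroids[i] - centroids[j]) ** 2).sum()
                for i in range(len(centroids))
                for j in range(i + 1, len(centroids)))
    return intra - lam * inter

def siamese_distance(emb_a, emb_b):
    """Pairwise similarity between two subjects, summarized here by the
    distance between mean node embeddings (an assumed readout)."""
    return np.linalg.norm(emb_a.mean(axis=0) - emb_b.mean(axis=0))
```

In a Siamese setup, `siamese_distance` would feed a contrastive objective that pulls same-class subject pairs together and pushes different-class pairs apart, while `community_loss` regularizes each subject's embedding to respect its community partition.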
Behind most of today's artificial intelligence technologies, from self-driving cars to facial recognition and virtual assistants, lie artificial neural networks. Though based loosely on the way neurons communicate in the brain, these "deep learning" systems remain incapable of many basic functions that would be essential for primates and other organisms. However, a new study from University of Chicago neuroscientists found that adapting a well-known brain mechanism can dramatically improve the ability of artificial neural networks to learn multiple tasks and avoid the persistent AI challenge of "catastrophic forgetting." The study, published in Proceedings of the National Academy of Sciences, provides a unique example of how neuroscience research can inform new computer science strategies, and, conversely, how AI technology can help scientists better understand the human brain. When combined with previously reported methods for stabilizing synaptic connections in artificial neural networks, the new algorithm allowed single artificial neural networks to learn and perform hundreds of tasks with only minimal loss of accuracy, potentially enabling more powerful and efficient AI technologies.
In the past 10 years, the best-performing artificial-intelligence systems, such as the speech recognizers on smartphones or Google's latest automatic translator, have resulted from a technique called "deep learning." Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what's sometimes called the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory. The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.
Physicians have long relied on visual judgment of medical images to determine the course of cancer treatment. A new software package from Fraunhofer researchers uses deep learning to reveal changes between images, making this task easier. The experts will demonstrate the software in Chicago from November 27 to December 2 at RSNA, the world's largest radiology meeting. Has a tumor shrunk during the course of treatment over several months, or have new tumors developed? To answer questions like these, physicians often perform CT and MRI scans.