Machines use Google-style algorithms on biopsy images to help children get treatment faster. A study published today in the open access journal JAMA Network Open by scientists at the University of Virginia schools of Engineering and Medicine says machine learning algorithms applied to biopsy images can shorten the time for diagnosing and treating a gut disease that often causes permanent physical and cognitive damage in children from impoverished areas. In places where sanitation, potable water and food are scarce, there are high rates of children suffering from environmental enteric dysfunction, a disease that limits the gut's ability to absorb essential nutrients and can lead to stunted growth, impaired brain development and even death. The disease affects 20 percent of children under the age of 5 in low- and middle-income countries, such as Bangladesh, Zambia and Pakistan, but it also affects some children in rural Virginia. For Dr. Sana Syed, an assistant professor of pediatrics in the UVA School of Medicine, this project is an example of why she got into medicine.
Lifelong learning aims to develop machine learning systems that can learn new tasks while preserving performance on previous tasks. This approach can be applied, for example, to prevent accidents in autonomous vehicles by applying knowledge learned in previous situations. In this paper we present a method that overcomes catastrophic forgetting: it learns new tasks and preserves performance on old tasks, without accessing the data used to train the original model, by selective network augmentation, using convolutional neural networks for image classification. Experimental results showed that our method outperforms the state-of-the-art Learning without Forgetting algorithm in some scenarios. Results also showed that in some situations it is better to use our model than to train a neural network with isolated learning.
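The abstract does not spell out the architecture, but the core idea of growing a network for new tasks while leaving old-task behavior untouched can be illustrated with a minimal NumPy sketch. Everything below (the class, the frozen backbone, the per-task heads) is a hypothetical illustration, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

class AugmentableNet:
    """Toy network with a frozen shared backbone and per-task heads,
    sketching the idea behind selective network augmentation."""

    def __init__(self, n_features, n_hidden):
        # Frozen shared backbone (stands in for pretrained conv layers).
        self.W_shared = rng.normal(size=(n_features, n_hidden))
        self.n_hidden = n_hidden
        self.heads = []  # one linear head per task

    def add_task_head(self, n_classes):
        # Augment the network: only the new head would be trained.
        self.heads.append(rng.normal(size=(self.n_hidden, n_classes)))
        return len(self.heads) - 1  # task id

    def forward(self, x, task_id):
        h = np.tanh(x @ self.W_shared)  # shared, frozen features
        return h @ self.heads[task_id]  # task-specific logits

net = AugmentableNet(n_features=8, n_hidden=16)
t0 = net.add_task_head(n_classes=3)

x = rng.normal(size=(4, 8))
before = net.forward(x, t0)

# Adding a second task augments the net without touching task 0,
# so there is no catastrophic forgetting and no need for old data.
t1 = net.add_task_head(n_classes=5)
after = net.forward(x, t0)

assert np.allclose(before, after)  # old-task outputs are preserved
```

Because the old heads and backbone are never modified, the original training data is never needed again, which is the property the abstract emphasizes.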
Artificial intelligence (AI) tools trained to detect pneumonia on chest X-rays suffered significant decreases in performance when tested on data from outside health systems, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published in a special issue of PLOS Medicine on machine learning and health care. These findings suggest that artificial intelligence in the medical space must be carefully tested for performance across a wide range of populations; otherwise, the deep learning models may not perform as accurately as expected. As interest grows in the use of computer system frameworks called convolutional neural networks (CNNs) to analyze medical imaging and provide a computer-aided diagnosis, recent studies have suggested that AI image classification may not generalize to new data as well as commonly portrayed. Researchers at the Icahn School of Medicine at Mount Sinai assessed how AI models identified pneumonia in 158,000 chest X-rays across three medical institutions: the National Institutes of Health; The Mount Sinai Hospital; and Indiana University Hospital. Researchers chose to study the diagnosis of pneumonia on chest X-rays for its common occurrence, clinical significance, and prevalence in the research community.
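The mechanism behind such cross-site failures is distribution shift: a decision rule tuned to one hospital's image statistics can degrade when scanners or protocols differ. The toy simulation below (entirely hypothetical data, not the study's X-rays or models) shows the effect with a single "opacity score" feature and a fixed threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_site(n, pneumonia_mean, healthy_mean):
    # Toy 1-D "opacity score" per chest X-ray (hypothetical feature).
    scores = np.concatenate([rng.normal(pneumonia_mean, 0.5, n),
                             rng.normal(healthy_mean, 0.5, n)])
    labels = np.concatenate([np.ones(n), np.zeros(n)])
    return scores, labels

# Internal site: the classifier's decision threshold fits the data well.
x_int, y_int = simulate_site(500, pneumonia_mean=1.0, healthy_mean=-1.0)
# External site: a scanner/protocol shift raises every score by 1.5.
x_ext, y_ext = simulate_site(500, pneumonia_mean=2.5, healthy_mean=0.5)

threshold = 0.0  # "trained" on the internal site only
acc_int = np.mean((x_int > threshold) == y_int)
acc_ext = np.mean((x_ext > threshold) == y_ext)

print(f"internal accuracy: {acc_int:.2f}")  # high
print(f"external accuracy: {acc_ext:.2f}")  # degraded by the shift
assert acc_int > acc_ext
```

The same threshold that separates classes cleanly at the internal site misclassifies many healthy external scans, which is why the study argues for validation across multiple institutions before deployment.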
This study compares various superlearner and deep learning architectures (machine-learning-based and neural-network-based) for classification problems across several simulated and industrial datasets to assess performance and computational efficiency, as both methods have nice theoretical convergence properties. Superlearner formulations outperform other methods at small to moderate sample sizes (500-2500) on nonlinear and mixed linear/nonlinear predictor relationship datasets, while deep neural networks perform well on linear predictor relationship datasets of all sizes. This suggests faster convergence of the superlearner compared to deep neural network architectures on many messy classification problems for real-world data. Superlearners also yield interpretable models, allowing users to examine important signals in the data; in addition, they offer flexible formulation, where users can retain good performance with low-computational-cost base algorithms. K-nearest-neighbor (KNN) regression demonstrates improvements using the superlearner framework, as well; KNN superlearners consistently outperform deep architectures and KNN regression, suggesting that superlearners may be better able to capture local and global geometric features through utilizing a variety of algorithms to probe the data space.
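The study's exact superlearner formulation is not given here, but a super learner is, in essence, a stacked ensemble: base learners are fit with cross-validation and a meta-learner combines their predictions. A minimal sketch using scikit-learn's `StackingClassifier`, with KNN among the base learners as in the abstract (the dataset and hyperparameters are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Simulated classification data at a moderate sample size, in the spirit
# of the study's 500-2500 range (parameters chosen for illustration).
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Cross-validated stacking: each base learner's out-of-fold predictions
# become features for the logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("tree", DecisionTreeClassifier(max_depth=5, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(f"held-out accuracy: {stack.score(X_te, y_te):.2f}")
```

The flexibility the abstract highlights comes from the `estimators` list: cheap base algorithms such as KNN can be swapped in or out without changing the framework, and the meta-learner's weights indicate which base signals matter.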
How smart is the form of artificial intelligence known as deep learning, and how closely do these networks mimic the human brain? They have improved greatly in recent years, but still have a long way to go, a team of UCLA cognitive psychologists reports in the journal PLOS Computational Biology. Supporters have expressed enthusiasm for the use of these networks to do many individual tasks, and even jobs, traditionally performed by people. However, results of the five experiments in this study showed that it's easy to fool the networks, and that the networks' method of identifying objects using computer vision differs substantially from human vision. "The machines have severe limitations that we need to understand," said Philip Kellman, a UCLA distinguished professor of psychology and a senior author of the study.