Detecting Memorization in ReLU Networks (Machine Learning)

We propose a new notion of 'non-linearity' of a network layer with respect to an input batch, based on its proximity to a linear system, as reflected in the nonnegative rank of the activation matrix. Considering batches of similar samples, we find that high non-linearity in deep layers is indicative of memorization. Furthermore, by applying our approach layer-by-layer, we find that the mechanism for memorization consists of distinct phases. We perform experiments on fully-connected and convolutional neural networks trained on several image and audio datasets. Our results demonstrate that, as an indicator for memorization, our technique can be used to perform early stopping.

A fundamental challenge in machine learning is balancing the bias-variance tradeoff: overly simple models underfit the data (suboptimal performance even on the training set), while overly complex models are expected to overfit or memorize the data (perfect training-set performance, but suboptimal test-set performance). The latter half of this tradeoff has come into question with the observation that deep neural networks do not memorize their training data despite having sufficient capacity to do so (Zhang et al., 2016), the explanation of which is a matter of much interest.
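The core quantity the abstract refers to, the nonnegative rank of a ReLU layer's activation matrix, is NP-hard to compute exactly, but it can be probed empirically by checking how well the matrix is approximated by nonnegative factorizations of increasing rank. The sketch below is illustrative only, not the paper's implementation: the layer weights are random placeholders, and the nonnegative rank is proxied by the Frobenius reconstruction error of a Lee-Seung multiplicative-update NMF at a few candidate ranks.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer: the activation matrix for one batch of inputs.
X = rng.normal(size=(32, 16))   # batch of 32 samples, 16 features
W = rng.normal(size=(16, 64))   # placeholder layer weights
A = relu(X @ W)                 # nonnegative activation matrix (32 x 64)

def nmf_error(A, r, iters=200, eps=1e-9):
    """Frobenius reconstruction error of a rank-r nonnegative
    factorization A ~ Wf @ Hf, fit with multiplicative updates."""
    m, n = A.shape
    Wf = rng.random((m, r)) + eps
    Hf = rng.random((r, n)) + eps
    for _ in range(iters):
        Hf *= (Wf.T @ A) / (Wf.T @ Wf @ Hf + eps)
        Wf *= (A @ Hf.T) / (Wf @ Hf @ Hf.T + eps)
    return np.linalg.norm(A - Wf @ Hf)

# The error shrinks as the allowed nonnegative rank grows; how quickly it
# shrinks for a batch of similar samples is a proxy for how far the layer
# is from acting as a low-rank (near-linear) system on that batch.
errors = [nmf_error(A, r) for r in (1, 4, 16)]
```

In this framing, a layer whose activations on a batch of similar samples need a high nonnegative rank to reconstruct accurately is behaving more "non-linearly" on that batch, which the paper connects to memorization in deep layers.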

AI can speed up precision medicine, New York Genome Center-IBM Watson study shows


The potential for artificial intelligence in precision medicine is big, according to conclusions of a new study by the New York Genome Center and IBM. The results, published in the July 11 issue of Neurology Genetics, a journal of the American Academy of Neurology, showed that researchers at the New York Genome Center, Rockefeller University and other institutions – along with IBM – verified the potential of IBM Watson for Genomics to analyze complex genomic data from state-of-the-art DNA sequencing of whole genomes. "This study documents the strong potential of Watson for Genomics to help clinicians scale precision oncology more broadly," Vanessa Michelini, Watson for Genomics Innovation Leader for IBM Watson Health, said in a statement. "Clinical and research leaders in cancer genomics are making tremendous progress towards bringing precision medicine to cancer patients, but genomic data interpretation is a significant obstacle, and that's where Watson can help." The proof of concept study compared multiple techniques used to analyze genomic data from a glioblastoma patient's tumor cells and normal healthy cells, putting to work a beta version of Watson for Genomics technology to help interpret whole genome sequencing data for one patient.

Scientists used artificial intelligence to discover a 2,000 year-old stick figure in Peru's mysterious Nazca Lines


Artificial intelligence has helped archaeologists uncover an ancient lost work of art. The Nazca Lines in Peru are ancient geoglyphs, images carved into the landscape. First formally studied in 1926, they depict people, animals, plants, and geometric shapes. The formations vary in size, with some of the biggest running up to 30 miles long. Their exact purpose is unknown, although some archaeologists think they may have had religious or spiritual significance.

New Research from the MIT-IBM Watson AI Lab Reveals How Work is Transforming IBM Research Blog


Rapid advancements in the field of artificial intelligence (AI) are uniquely poised to transform entire occupations and industries, changing the way work will be done in the future. It is imperative to understand the extent and nature of the changes so that we can prepare today for the jobs of tomorrow. New empirical work from the MIT-IBM Watson AI Lab uncovers how jobs will transform as AI and new technologies continue to scale across business and industries. We created a novel dataset using machine learning techniques on 170 million U.S. job postings. The dataset and research, The Future of Work: How New Technologies Are Transforming Tasks, allow us to extract key insights into how AI is shaping the future of work.

UVA Scientists Use Machine Learning to Improve Gut Disease Diagnosis


Machines use Google-type algorithms on biopsy images to help children get treatment faster. A study published today in the open-access journal JAMA Network Open by scientists at the University of Virginia schools of Engineering and Medicine says that machine learning algorithms applied to biopsy images can shorten the time to diagnose and treat a gut disease that often causes permanent physical and cognitive damage in children from impoverished areas. In places where sanitation, potable water and food are scarce, there are high rates of children suffering from environmental enteric dysfunction, a disease that limits the gut's ability to absorb essential nutrients and can lead to stunted growth, impaired brain development and even death. The disease affects 20 percent of children under the age of 5 in low- and middle-income countries, such as Bangladesh, Zambia and Pakistan, but it also affects some children in rural Virginia. For Dr. Sana Syed, an assistant professor of pediatrics in the UVA School of Medicine, this project is an example of why she got into medicine.