"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
A team of researchers has stumbled on a question that is mathematically unanswerable, because it is linked to logical paradoxes discovered by the Austrian mathematician Kurt Gödel in the 1930s that cannot be resolved within standard mathematics. The mathematicians, who were working on a machine-learning problem, show that the question of 'learnability', whether an algorithm can extract a pattern from limited data, is linked to a paradox known as the continuum hypothesis. Gödel and, later, Paul Cohen showed that this statement can be neither proved nor refuted using standard mathematical language. The latest result appeared on 7 January in Nature Machine Intelligence [1].
During the twentieth century, discoveries in mathematical logic revolutionized our understanding of the very foundations of mathematics. In 1931, the logician Kurt Gödel showed that, in any system of axioms that is expressive enough to model arithmetic, some true statements will be unprovable [1]. And in the following decades, it was demonstrated that the continuum hypothesis -- which states that no set of distinct objects has a size larger than that of the integers but smaller than that of the real numbers -- can be neither proved nor refuted using the standard axioms of mathematics [2-4]. Now, the researchers behind the latest result identify a machine-learning problem whose fate depends on the continuum hypothesis, leaving its resolution forever beyond reach. Machine learning is concerned with the design and analysis of algorithms that can learn and improve their performance as they are exposed to data.
Two scientists at the University of Washington (UW) School of Medicine have developed a software program to identify and decode rodent vocalizations, representing the first use of deep artificial neural networks in squeak detection. The DeepSqueak deep neural network converts audio signals into an image, or sonogram, which can then be analyzed with machine-vision algorithms developed for self-driving cars. Said the UW School of Medicine's Russell Marx, "DeepSqueak uses biomimetic algorithms that learn to isolate vocalizations by being given labeled examples of vocalizations and noise." According to co-developer Kevin Coffey, the program can distinguish among about 20 kinds of rodent calls.
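The audio-to-sonogram step can be sketched with a short-time Fourier transform. The snippet below is a minimal NumPy illustration, not DeepSqueak's code; the sampling rate, window length, hop size, and the synthetic chirp standing in for a squeak are all assumptions made for the example.

```python
import numpy as np

def sonogram(signal, window=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        frame = signal[start:start + window] * np.hanning(window)  # taper the frame
        frames.append(np.abs(np.fft.rfft(frame)))                  # frequency magnitudes
    return np.array(frames).T            # rows = frequency bins, columns = time steps

# Synthetic "squeak": a rising chirp, 0.1 s long at a 25 kHz sampling rate.
fs = 25_000
t = np.arange(0, 0.1, 1 / fs)
chirp = np.sin(2 * np.pi * (2_000 + 30_000 * t) * t)

img = sonogram(chirp)
print(img.shape)                         # one 2-D image per audio clip
```

Once the audio is an image like `img`, off-the-shelf image-recognition machinery can take over, which is the trick DeepSqueak exploits.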
When Google announced that it would absorb DeepMind's health division, it sparked a major controversy over data privacy. Though DeepMind confirmed that the move wouldn't actually hand raw patient data to Google, just the idea of giving a tech giant intimate, identifying medical records made people queasy. This difficulty of obtaining large amounts of high-quality data without compromising privacy has become the biggest obstacle to applying machine learning in medicine. To get around the issue, AI researchers have been advancing new techniques for training machine-learning models while keeping the data confidential. The latest method, out of MIT, is called a split neural network: it allows one person to start training a deep-learning model and another person to finish.
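The split-training idea can be sketched in a few lines of NumPy. This is a generic illustration of split learning, not MIT's implementation: the data holder runs the first layer and shares only the intermediate activations (and, in this simplified variant, the labels); the other party finishes the forward pass and sends back only the gradient at the cut, so the raw inputs never leave the data holder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Party A (the data holder) owns the raw inputs, the labels, and the first layer.
X = rng.normal(size=(64, 4))                          # private data: never shared
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # private labels
W1 = rng.normal(scale=0.5, size=(4, 8))

# Party B owns the rest of the network and finishes the computation.
W2 = rng.normal(scale=0.5, size=(8, 1))

lr, losses = 0.5, []
for step in range(300):
    # Party A: forward through its layer; only `h` (and labels) cross the boundary.
    h = np.maximum(0.0, X @ W1)              # ReLU activations at the "cut"

    # Party B: finish the forward pass and compute the loss.
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))      # sigmoid output
    losses.append(float(-np.mean(y * np.log(p + 1e-9)
                                 + (1 - y) * np.log(1 - p + 1e-9))))

    # Party B: backprop through its own layer; only the gradient at the cut goes back.
    dz = (p - y) / len(X)
    dh = dz @ W2.T                           # sent back to Party A
    W2 -= lr * (h.T @ dz)

    # Party A: finish backprop locally; raw X never left this side.
    W1 -= lr * (X.T @ (dh * (h > 0)))

print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In a real deployment the two halves would run in separate processes or machines, exchanging `h` and `dh` over a network rather than sharing memory.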
Amazon SageMaker, Microsoft Azure ML Services, Google Cloud ML Engine, and IBM Watson Knowledge Studio are examples of ML PaaS (machine learning platform as a service) in the cloud. If your business wants to bring agility to machine-learning model development and deployment, consider ML PaaS: it combines the proven techniques of CI/CD with ML model management.
What jobs will AI probably not destroy? The jobs that are most susceptible to automation in the near term are those that are fundamentally routine or predictable in nature. If you have a boring job, one where you come to work and do the same kinds of things again and again, you should probably worry. The tasks within such jobs are likely to be encapsulated in the data collected by organizations, so it may only be a matter of time before a powerful machine-learning algorithm comes along that can automate much of this work.
In early December, researchers at DeepMind, the artificial-intelligence company owned by Google's parent corporation, Alphabet Inc., filed a dispatch from the frontiers of chess. A year earlier, on Dec. 5, 2017, the team had stunned the chess world with its announcement of AlphaZero, a machine-learning algorithm that had mastered not only chess but shogi, or Japanese chess, and Go. The algorithm started with no knowledge of the games beyond their basic rules. It then played against itself millions of times and learned from its mistakes. In a matter of hours, the algorithm became the best player, human or computer, the world has ever seen.
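AlphaZero itself couples deep networks with Monte Carlo tree search, but the loop it popularized, knowing only the rules, playing against yourself, and learning from mistakes, can be shown in miniature. The sketch below uses tabular Q-learning on a toy take-1-or-2 subtraction game (the player who takes the last stone wins); the game, the hyperparameters, and the learning rule are illustrative stand-ins, not AlphaZero's algorithm.

```python
import random

random.seed(0)

ACTIONS = (1, 2)       # a move removes one or two stones
MAX_PILE = 10
Q = {(n, a): 0.0 for n in range(1, MAX_PILE + 1) for a in ACTIONS if a <= n}

def best(n):
    """Greedy action for the player to move with n stones left."""
    return max((a for a in ACTIONS if a <= n), key=lambda a: Q[(n, a)])

alpha, eps = 0.5, 0.3
for episode in range(20_000):
    n = random.randint(1, MAX_PILE)          # self-play from a random start
    while n > 0:
        legal = [a for a in ACTIONS if a <= n]
        a = random.choice(legal) if random.random() < eps else best(n)
        if a == n:                           # taking the last stone wins
            target = 1.0
        else:                                # otherwise the opponent moves next,
            target = -max(Q[(n - a, b)]      # so our value is minus their best
                          for b in ACTIONS if b <= n - a)
        Q[(n, a)] += alpha * (target - Q[(n, a)])
        n -= a                               # the same learner plays both sides

print(best(4), best(5))  # optimal play leaves the opponent a multiple of 3
```

After enough self-play, the greedy policy recovers the game's known optimal strategy (always leave a pile divisible by 3), starting from nothing but the rules.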
Stanford University engineers have developed a method for locating every solar panel in the contiguous U.S. from a massive satellite-image database, using a deep learning computer model. The researchers used a pre-trained model called Inception as the basis for the DeepSolar neural network's clustering and classification of pixels in images. DeepSolar scanned more than 1 billion image "tiles," each covering an area bigger than a neighborhood but smaller than a zip code; each tile had 102,400 pixels, and DeepSolar classified each pixel in each tile, determining whether it was likely part of a solar panel. The network completed this task in less than a month, ascertaining that regions with more sun exposure had greater solar-panel adoption than areas with less average sunlight.
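The tile-by-tile, pixel-by-pixel workflow can be caricatured as follows. The per-pixel classifier here is a placeholder brightness threshold rather than the Inception-based network, and the 320 x 320 tiles (102,400 pixels, matching the tile size described above) are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify_pixels(tile, threshold=0.8):
    """Placeholder per-pixel classifier: True where a pixel looks 'panel-like'.
    DeepSolar uses a trained network for this step; a threshold is a stand-in."""
    return tile > threshold

def panel_fraction(tile):
    """Fraction of a tile's pixels classified as part of a solar panel."""
    return float(classify_pixels(tile).mean())

# Synthetic tiles: 320 x 320 = 102,400 pixels each, values in [0, 1].
tiles = rng.random(size=(10, 320, 320))
tiles[0, 50:90, 50:90] = 0.95            # plant a bright "panel" in tile 0

fractions = [panel_fraction(t) for t in tiles]
best_tile = int(np.argmax(fractions))
print(best_tile, round(fractions[best_tile], 3))
```

Aggregating per-pixel decisions into per-tile statistics is what lets a survey like this scale to a billion tiles: each tile reduces to a handful of numbers.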
Although deep learning holds enormous promise for advancing new discoveries in genomics, it should also be implemented mindfully and with appropriate caution. Deep learning should be applied to biological datasets of sufficient size, usually on the order of thousands of samples. The 'black box' nature of deep neural networks is an intrinsic property and does not necessarily lend itself well to complete understanding or transparency. Subtle variations in the input data can have outsized effects and must be controlled for as well as possible. Importantly, deep learning methods should be compared with simpler machine learning models with fewer parameters, to ensure that the additional model complexity afforded by deep learning has not led to overfitting of the data.
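The final recommendation, benchmarking a high-capacity model against a simpler baseline on held-out data, can be made concrete with a toy example. Polynomial regression stands in for both models here (degree 1 as the few-parameter baseline, degree 15 as the over-parameterized model); the data are synthetic, and the validation grid extends slightly past the training range, where an overfit model extrapolates poorly. None of this is a genomics pipeline, only the comparison protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic dataset: a simple linear signal plus noise.
x_tr = rng.uniform(-1, 1, size=40)
y_tr = 2.0 * x_tr + rng.normal(scale=0.4, size=40)
x_va = np.linspace(-1.3, 1.3, 27)        # held-out grid, slightly wider range
y_va = 2.0 * x_va + rng.normal(scale=0.4, size=27)

def val_mse(deg):
    """Fit a degree-`deg` polynomial on the training split, score on validation."""
    coeffs = np.polyfit(x_tr, y_tr, deg)
    return float(np.mean((np.polyval(coeffs, x_va) - y_va) ** 2))

simple_mse = val_mse(1)    # low-parameter baseline: 2 coefficients
complex_mse = val_mse(15)  # high-capacity stand-in: 16 coefficients, little data

print(f"baseline MSE {simple_mse:.3f} vs high-capacity MSE {complex_mse:.3f}")
# If the complex model's validation error is no better, prefer the simpler model.
```

The same protocol applies whatever the models are: if the deep network cannot beat a logistic regression or random forest on held-out samples, the extra parameters are likely fitting noise.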