"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw-Hill (1997).
To diagnose depression, clinicians interview patients, asking specific questions -- about, say, past mental illnesses, lifestyle, and mood -- and identify the condition based on the patient's responses. In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict whether a person is depressed based on that person's specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.
However, even these reinforcement learning algorithms couldn't transfer what they'd learned about one task to acquiring a new task. To realize this achievement, DeepMind supercharged a reinforcement learning algorithm called A3C. In so-called actor-critic reinforcement learning, of which A3C is one variety, acting and learning are decoupled so that one neural network, the critic, evaluates the other, the actor. Together, they drive the learning process. This was already the state of the art, but DeepMind added a new off-policy correction algorithm called V-trace to the mix, which made the learning more efficient and, crucially, better able to achieve positive transfer between tasks.
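To make the actor-critic idea concrete, here is a minimal tabular sketch (not DeepMind's A3C or V-trace; the state/action sizes, learning rate, and function names are illustrative assumptions). The critic estimates state values, and its temporal-difference error drives updates to both the critic and the actor:

```python
import numpy as np

n_states, n_actions = 4, 2
gamma = 0.99  # discount factor (illustrative choice)

# Critic: a table of state values V(s). Actor: preference logits per (state, action).
values = np.zeros(n_states)
logits = np.zeros((n_states, n_actions))

def policy(state):
    """Softmax over the actor's action preferences for one state."""
    z = np.exp(logits[state] - logits[state].max())
    return z / z.sum()

def update(state, action, reward, next_state, lr=0.1):
    """One actor-critic step: the critic's TD error drives both updates."""
    td_error = reward + gamma * values[next_state] - values[state]
    values[state] += lr * td_error            # critic moves V(s) toward the TD target
    grad = -policy(state)
    grad[action] += 1.0                       # gradient of log pi(action|state) w.r.t. logits
    logits[state] += lr * td_error * grad     # actor follows the critic's evaluation
    return td_error
```

The decoupling the article describes is visible here: the actor never sees rewards directly; it only sees the critic's judgment (the TD error) of how much better or worse an action turned out than expected.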
A novel encryption method devised by MIT researchers secures data used in online neural networks without dramatically slowing their runtimes. This approach holds promise for using cloud-based neural networks for medical-image analysis and other applications that use sensitive data. Outsourcing machine learning is a rising trend in industry. Major tech firms have launched cloud platforms that conduct computation-heavy tasks, such as running data through a convolutional neural network (CNN) for image classification. Resource-strapped small businesses and other users can upload data to those services for a fee and get results back within hours.
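The article doesn't detail the MIT scheme, but the core idea of computing on data a server can't read can be illustrated with a deliberately simple toy: additive secret sharing of a linear layer across two non-colluding servers. This is not the researchers' method, just a sketch of why such protocols are possible at all; every variable name here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# A client wants cloud servers to apply a linear layer W @ x
# without either server learning the sensitive input x.
W = rng.standard_normal((3, 5))   # model weights, held by the servers
x = rng.standard_normal(5)        # the client's sensitive input

# Split x into two random shares; each share alone is pure noise.
share_1 = rng.standard_normal(5)
share_2 = x - share_1

# Each server computes on its own share only.
partial_1 = W @ share_1
partial_2 = W @ share_2

# The client recombines the partial results; by linearity,
# W @ share_1 + W @ share_2 == W @ (share_1 + share_2) == W @ x.
result = partial_1 + partial_2
```

The toy works only for linear operations; handling a full CNN (with nonlinear activations) efficiently is exactly what makes schemes like the one described here technically hard.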
The team achieved a peak rate between 11.73 and 15.07 petaflops (single-precision) when running its data set on the Cori supercomputer. Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications. However, the use of machine learning in high performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for big data sets, like those used to study stars and planets. A team from Intel, National Energy Research Scientific Computing Center (NERSC), and Stanford changed that.
DeepMind's artificial intelligence can now spot key signs of eye disease as well as the world's top doctors. Anonymized diagnostic data from almost 15,000 NHS patients was used to help the AI learn how to spot 10 key features of eye disease in complex optical coherence tomography (OCT) retinal scans. An OCT scan uses light rather than X-rays or ultrasound to generate 3D images of the back of the eye, revealing abnormalities that may be signs of disease. The system has the potential to prevent irreversible sight loss by ensuring that patients with the most serious eye conditions receive early treatment. DeepMind's new system was developed alongside scientists at Moorfields Eye Hospital and University College London.
Irvine, Calif.-based Syntiant thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. Austin, Tex.-based Mythic thinks it can use embedded flash memory to greatly reduce the amount of power needed to perform deep-learning computations. They both might be right. A growing crowd of companies is hoping to deliver chips that accelerate otherwise onerous deep-learning applications, and to some degree they all have similarities because "these are solutions that are created by the shape of the problem," explains Mythic founder and CTO Dave Fick. When executed in a CPU, that problem is shaped like a traffic jam of data.
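The "shape of the problem" is worth making concrete: deep-learning inference is dominated by multiply-accumulate (MAC) operations, and on a CPU each MAC implies fetching a weight from memory, which is the traffic jam Fick describes. A small sketch (the layer sizes are illustrative assumptions, not any company's workload) counts those weight fetches for one forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative small fully connected network.
layer_sizes = [784, 256, 64, 10]
x = rng.standard_normal(layer_sizes[0])

total_macs = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    W = rng.standard_normal((n_out, n_in))
    x = np.maximum(W @ x, 0.0)   # dense layer followed by ReLU
    total_macs += n_out * n_in   # one weight fetch per MAC on a CPU

print(total_macs)
```

Even this toy network needs over 200,000 weight fetches per input; in-memory designs like Syntiant's and Mythic's aim to perform the multiply where the weight is already stored, so those fetches never cross the memory bus.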
In June of last year, five researchers at Facebook's Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations. While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: "Balls have zero to me to me to me to me to me to me to me to." On seeing these results, the team realized that they had failed to include a constraint limiting the bots to sentences within the parameters of spoken English; as a result, the bots developed a kind of machine-English patois to communicate between themselves. Other experts in the field considered these findings fairly interesting, but not totally surprising or groundbreaking. A month after this initial research was released, Fast Company published an article titled "AI Is Inventing Language Humans Can't Understand."
Hephaestus, the Greek god of blacksmiths, metalworking and carpenters, was said to have fashioned artificial beings in the form of golden robots. Myth finally moved toward truth in the 20th century, as AI developed in a series of fits and starts, finally gaining major momentum--and reaching a tipping point--by the turn of the millennium. Here's how the modern history of AI and ML unfolded, starting in the years just following World War II. In 1950, while working at the University of Manchester, legendary code breaker Alan Turing (subject of the 2014 movie The Imitation Game) published a paper titled "Computing Machinery and Intelligence." It became famous for positing what became known as the "Turing test."
With all the excitement over neural networks and deep-learning techniques, it's easy to imagine that the world of computer science consists of little else. Neural networks, after all, have begun to outperform humans in tasks such as object and face recognition and in games such as chess, Go, and various arcade video games. These networks are loosely modeled on the way the human brain works. Nothing could have more potential than that, right? An entirely different type of computing has the potential to be significantly more powerful than neural networks and deep learning.