Neural Networks: AI-Alerts

Google's AI Guru Wants Computers to Think More Like Brains


In the early 1970s, a British grad student named Geoff Hinton began to make simple mathematical models of how neurons in the human brain visually understand the world. Artificial neural networks, as they are called, remained an impractical technology for decades. But in 2012, Hinton and two of his grad students at the University of Toronto used them to deliver a big jump in the accuracy with which computers could recognize objects in photos. Within six months, Google had acquired a startup founded by the three researchers. Previously obscure, artificial neural networks were the talk of Silicon Valley.

IBM SpectrumAI Brings Scalable Storage To Deep Learning


AI and deep learning are invading the enterprise. NVIDIA Corporation is in the midst of an unprecedented run, delivering targeted technology and products that enable companies to learn from their data. These insights can yield competitive intelligence, reveal new trends, fuel control systems for intelligent infrastructure, or simply provide predictive capabilities to better manage the business. The challenge in deploying these systems is one of balance. Storage in the datacenter has evolved to serve the needs of mainstream business applications, not highly parallel deep learning systems.

DeepMind Achieves Holy Grail: An AI That Can Master Games Like Chess and Go Without Human Help

IEEE Spectrum Robotics Channel

DeepMind, the London-based subsidiary of Alphabet, has created a system that can quickly master any game in the class that includes chess, Go, and Shogi, and do so without human guidance. The system, called AlphaZero, began its life last year by beating a DeepMind system that had been specialized just for Go. That earlier system had itself made history by beating one of the world's best Go players, but it needed human help to get through a months-long course of improvement. AlphaZero trained itself in just three days. AlphaZero, playing White against Stockfish, began by identifying four candidate moves.

DeepMind - Wikipedia


DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, but has research centres in California, Canada[4], and France[5]. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans,[6] as well as a Neural Turing machine,[7] or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[8][9] The company made headlines in 2016 after its AlphaGo program beat a human professional Go player for the first time in October 2015[10] and again when AlphaGo beat Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.[11] A more generic program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few hours of play against itself using reinforcement learning.[12]

Machine learning spots natural selection at work in human genome


The ability to sequence genomes quickly has provided scientists with reams of data, but understanding how evolution has shaped humans is still a difficult task. Pinpointing where and how the human genome is evolving can be like hunting for a needle in a haystack. Each person's genome contains three billion building blocks called nucleotides, and researchers must compile data from thousands of people to discover patterns that signal how genes have been shaped by evolutionary pressures. To find these patterns, a growing number of geneticists are turning to a form of machine learning called deep learning. Proponents of the approach say that deep-learning algorithms incorporate fewer explicit assumptions about what the genetic signatures of natural selection should look like than do conventional statistical methods.

Machine learning, meet quantum computing


Back in 1958, in the earliest days of the computing revolution, the US Office of Naval Research organized a press conference to unveil a device invented by a psychologist named Frank Rosenblatt at the Cornell Aeronautical Laboratory. Rosenblatt called his device a perceptron, and the New York Times reported that it was "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence." Those claims turned out to be somewhat overblown. But the device kick-started a field of research that still has huge potential today. A perceptron is a single-layer neural network.
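A single-layer perceptron is simple enough to sketch in a few lines. The toy below is an illustration only, not Rosenblatt's hardware: it learns the logical AND function with the classic perceptron update rule, and the learning rate and epoch count are chosen arbitrarily for the demo.

```python
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights and a bias with the perceptron update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Fire (output 1) if the weighted sum crosses the threshold.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Nudge weights toward the target whenever the prediction is wrong.
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the perceptron converges on it.
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
```

Because a single layer can only draw one linear boundary, this same device famously cannot learn XOR, which is part of why multi-layer networks were needed.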

Google's takeover of health app appears to renege on DeepMind promises

New Scientist

Another tech company doing something it said it wouldn't. Another eye roll, another shrug? On Tuesday, the London-based artificial intelligence company DeepMind announced that the team behind Streams – an app designed to monitor people in hospital with kidney disease – will be joining DeepMind's sister company Google. The tech giant wants to turn Streams into an AI-powered assistant for doctors and nurses. To create Streams, DeepMind used identifiable medical records of 1.6 million people obtained in a deal with the Royal …

Human Brain-Sized Artificial Intelligence (AI): Coming Soon To A Cloud Data Center Near You


Data center-hosted artificial intelligence is rapidly proliferating in both government and commercial markets, and while it's an exciting time for AI, only a narrow set of applications is being addressed, primarily limited to neural networks based on convolutional approaches. Other categories of AI include general AI, symbolic AI and bio-AI, and all three have different processing demands and run distinctly different algorithms. Virtually all of today's commercial AI systems run neural network applications. But much more control-intensive and powerful AI workloads using symbolic AI, bio-AI and general AI algorithms are ill-suited to GPU/TPU architectures. Today, commercial and governmental entities that need AI solutions are using workarounds to achieve more compute power for their neural net applications, and chief among these are specialty processors like Google TPUs and NVIDIA GPUs, provisioned in data centers specifically for AI workloads.

Artificial Intelligence Has a Strange New Muse: Our Sense of Smell


Today's artificial intelligence systems, including the artificial neural networks broadly inspired by the neurons and connections of the nervous system, perform wonderfully at tasks with known constraints. They also tend to require a lot of computational power and vast quantities of training data. That all serves to make them great at playing chess or Go, at detecting if there's a car in an image, at differentiating between depictions of cats and dogs. "But they are rather pathetic at composing music or writing short stories," said Konrad Kording, a computational neuroscientist at the University of Pennsylvania. "They have great trouble reasoning meaningfully in the world." Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Model can more naturally detect depression in conversations

MIT News

To diagnose depression, clinicians interview patients, asking specific questions -- about, say, past mental illnesses, lifestyle, and mood -- and identify the condition based on the patient's responses. In recent years, machine learning has been championed as a useful aid for diagnostics. Machine-learning models, for instance, have been developed that can detect words and intonations of speech that may indicate depression. But these models tend to predict whether a person is depressed based on that person's specific answers to specific questions. These methods are accurate, but their reliance on the type of question being asked limits how and where they can be used.
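The question-specific approach described above can be caricatured in a toy sketch. Everything here is invented for illustration -- the cue words, the scoring, and the threshold are not from the MIT model, which learns word and intonation features from labeled clinical interviews rather than using a fixed word list:

```python
# Illustrative only: a toy bag-of-words scorer over a patient's
# transcribed answers. Real models learn their features from data.
DEPRESSION_CUES = {"tired", "hopeless", "alone", "worthless", "sad"}

def risk_score(answers):
    """Fraction of cue words among all words across the answers."""
    words = [w.strip(".,!?").lower() for a in answers for w in a.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in DEPRESSION_CUES)
    return hits / len(words)

def flag(answers, threshold=0.05):
    """Flag a transcript for clinician follow-up, never as a diagnosis."""
    return risk_score(answers) >= threshold
```

The brittleness the article points to is visible even here: the scorer only works if the interview elicits the kinds of answers the word list anticipates, which is exactly the dependence on question type that limits where such models can be deployed.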