To demonstrate the power of a new chip that can run artificially intelligent algorithms, researchers have put it in a doll and programmed it to recognise emotions in facial images captured by a small camera. The chip costs just €115 to build – an indicator of how easy it is becoming to give devices basic AI abilities. Recent advances in AI mean we already have algorithms that can recognise objects, lip-read, make basic decisions and more. "We will have wearable devices, toys, drones, small robots, and things we can't even imagine yet that will all have basic artificial intelligence," says Deniz.
Artificial intelligences that can negotiate effectively would make useful virtual assistants, says Mike Lewis at Facebook's research lab. Lewis and his team trained their bots on a database of more than 5000 text conversations between people playing a two-player game in which they had to decide how to divvy up a number of items. The resulting bot was a much better negotiator, but ended up using a nonsensical language that was impossible for humans to understand. Keeping the bots' language intelligible will matter even more if dealmaking bots one day help negotiate things like insurance claims.
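The divide-the-items game the bots trained on can be pictured as a simple valuation problem: each player privately values the items differently and scores a proposed split by what they receive. Below is a minimal sketch with hypothetical items and values – not the team's actual training setup:

```python
# Toy sketch of the divide-the-items negotiation setup. Item names and
# values are hypothetical; the real bots learned from 5000+ human dialogues.
def deal_value(allocation, my_values):
    """Score an allocation: how much the items I receive are worth to me."""
    return sum(my_values[item] * count for item, count in allocation.items())

# Items on the table and how much one side privately values each of them.
items = {"book": 2, "hat": 1, "ball": 3}
my_values = {"book": 1, "hat": 4, "ball": 1}

# One possible split: I take the hat and one ball.
my_share = {"book": 0, "hat": 1, "ball": 1}
print(deal_value(my_share, my_values))  # 5
```

Because each side's values differ, a good negotiator can find splits where both players score well – which is what the bots learned to exploit.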
But a neural network developed by UK artificial intelligence firm DeepMind could help bring that kind of reasoning into focus by giving computers the ability to understand how different objects are related to each other. The ability to transfer abstract relations – such as whether something is to the left of another object or bigger than it – from one domain to another gives us a powerful mental toolkit with which to understand the world. The system answered these questions correctly 95.5 per cent of the time – slightly better than humans. To demonstrate its versatility, the relational reasoning part of the AI then had to answer questions about a set of very short stories, answering correctly 95 per cent of the time.
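One way to picture an abstract relation that transfers across domains: the same "left of" test applies to any objects that have positions, whatever else they are. A minimal illustrative sketch with a hypothetical scene representation (the network learns such relations from data rather than having them hand-coded):

```python
# Sketch of a transferable abstract relation: "left of" depends only on
# positions, so it works in any scene that supplies them. The scene
# format here is hypothetical.
def left_of(a, b):
    """True if object a sits to the left of object b."""
    return a["x"] < b["x"]

scene = {"ball": {"x": 1}, "cube": {"x": 4}}
print(left_of(scene["ball"], scene["cube"]))  # True
```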
A little striped fish that lives among rocks in Lake Tanganyika in East Africa has the unexpected ability to recognise individual faces, which it uses to keep menacing strangers in sight. The cichlid (Julidochromis transcriptus) identifies unfamiliar individuals by looking at the pattern around their eyes rather than at other body parts such as their fins or trunk, researchers have discovered. After recent research showed that aquarium fish can be taught to identify the faces of their human owners, the Tanganyikan cichlid has now demonstrated how facial recognition is used in the wild. "We found that our subjects were especially guarded against only unfamiliar face models, regardless of body type," says Hotta.
Previous work has identified that bundles of nerve fibres in the brain develop differently in infants who have older siblings with autism than in infants without this familial risk factor. The team used the brain scans from when the babies were 6 months old and behavioural data from when the children were 2 years old to train a machine-learning program to identify any brain connectivity patterns that might be linked to later signs of autism, such as repetitive behaviour, difficulties with language, or problems relating socially to others. After the training, the program used only the patterns from the 6-month-old brains to predict which of the children would show signs of autism at 2 years old. The goal is to use such a classifier system to identify infants likely to develop autism at an early age.
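The train-then-predict idea can be sketched with a toy stand-in classifier – here a nearest-centroid rule over made-up "connectivity" feature vectors, not the team's actual machine-learning program:

```python
# Minimal sketch of the approach: learn a pattern from 6-month
# connectivity features labelled with 2-year outcomes, then classify a
# new infant from brain data alone. All numbers are synthetic, and
# nearest-centroid is a stand-in for the real classifier.
def centroid(rows):
    """Average each feature across a group's examples."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy "connectivity" feature vectors with known 2-year outcomes.
train = {
    "autism": [[0.9, 0.2], [0.8, 0.3]],
    "typical": [[0.2, 0.8], [0.3, 0.9]],
}
centroids = {label: centroid(rows) for label, rows in train.items()}

def predict(features):
    """Assign the label whose group centroid is nearest."""
    return min(centroids, key=lambda lbl: distance(features, centroids[lbl]))

print(predict([0.85, 0.25]))  # classified from 6-month data alone
```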
"If Facebook had this thinking, we might not have had such a problem with fake news," he says. Rather than pushing every article it thinks Facebook users want to see, an algorithm that was more uncertain of its abilities would be more likely to defer to a human's better judgement. The Berkeley team designed a mathematical model of an interaction between humans and robots called the "off-switch game" to explore the idea of a computer's "self-confidence". AIs that refuse to let humans turn them off might sound far-fetched, but such considerations should be critical for anyone making robots that work alongside humans, says Marta Kwiatkowska at the University of Oxford.
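The intuition behind the off-switch idea can be sketched as an expected-utility comparison: an AI that is less certain its plan is good should be more willing to let the human switch it off. The utilities below are hypothetical illustrations, not the Berkeley team's actual model:

```python
# Toy sketch of the off-switch intuition. Deferring (being switched
# off) scores 0; the AI should act only if acting looks better in
# expectation. All payoffs are made up for illustration.
def allows_shutdown(p_plan_is_good, utility_if_good, utility_if_bad):
    """True if the AI's expected value of acting is worse than deferring."""
    expected = p_plan_is_good * utility_if_good + (1 - p_plan_is_good) * utility_if_bad
    return expected < 0

print(allows_shutdown(0.9, 10, -100))   # uncertain enough: defers  -> True
print(allows_shutdown(0.99, 10, -100))  # very confident: resists   -> False
```

The toy makes the article's point concrete: lowering the AI's "self-confidence" (the probability its plan is good) is what makes it willing to accept the off switch.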
Machine translation systems that convert sign language into text and back again are helping people who are deaf or have difficulty hearing to communicate with those who cannot sign. A sign language user can approach a bank teller and sign to the KinTrans camera that they'd like assistance, for example. KinTrans's machine learning algorithm translates each sign as it is made and then a separate algorithm turns those signs into a sentence that makes grammatical sense. KinTrans founder Mohamed Elwazer says his system can already recognise thousands of signs in both American and Arabic sign language with 98 per cent accuracy.
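The two-stage design described above – one algorithm labels each sign, a second turns the gloss sequence into a grammatical sentence – can be sketched as follows, with a hypothetical vocabulary and grammar rule rather than KinTrans's actual algorithms:

```python
# Sketch of a two-stage sign-to-text pipeline. The vocabulary, the
# integer "frame" inputs, and the grammar rule are all hypothetical.
def recognise(frames):
    """Stage 1 stand-in: map each captured sign to a gloss label."""
    vocab = {1: "HELP", 2: "I", 3: "WANT"}
    return [vocab[f] for f in frames]

def to_sentence(glosses):
    """Stage 2 stand-in: turn a gloss sequence into grammatical English."""
    rules = {("I", "WANT", "HELP"): "I would like some help."}
    return rules.get(tuple(glosses), " ".join(glosses))

glosses = recognise([2, 3, 1])  # signs in the order they were made
print(to_sentence(glosses))     # I would like some help.
```

Splitting recognition from grammar means each stage can be improved separately – one reason a pipeline like this is a natural design.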
There is a 50 per cent chance that machines will outperform humans in all tasks within 45 years, according to a survey of more than 350 artificial intelligence researchers. The results have "far-reaching social consequences," says Katja Grace at the Machine Intelligence Research Institute in Berkeley, California. Those in Asia typically gave shorter time frames than those in North America – predicting, for example, that AI would outperform humans on all tasks within 30 years, compared with the 74 years predicted by respondents in North America. "They predict that AI will surpass humans at the video game StarCraft in six years, compared to all Atari games in nine years," says Georgios Yannakakis at the University of Malta in Msida.
So, rather than looking for a reward in the game world, the algorithm was rewarded for exploring and mastering skills that led to it discovering more about the world. This type of approach can speed up learning times and improve the efficiency of algorithms, says Max Jaderberg at Google's AI company DeepMind. Its algorithm learned much more quickly than conventional reinforcement learning approaches. Imbued with a sense of curiosity, Pathak's own AI learned to stomp on enemies and jump over pits in Mario, and to explore faraway rooms and walk down hallways in another game similar to Doom.
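A curiosity-style reward can be sketched with a count-based novelty bonus that shrinks as a state becomes familiar – a simple stand-in for the learned curiosity signal described above, not Pathak's actual method:

```python
# Count-based sketch of a curiosity reward: novel states pay a large
# bonus, familiar ones pay less, so the agent is pushed to explore.
from collections import Counter

visit_counts = Counter()

def curiosity_reward(state):
    """Bonus of 1/sqrt(visits) – it decays as a state becomes familiar."""
    visit_counts[state] += 1
    return 1.0 / visit_counts[state] ** 0.5

print(curiosity_reward("room_a"))  # novel state: 1.0
print(curiosity_reward("room_a"))  # revisit: smaller bonus (~0.707)
```

Feeding this bonus into an ordinary reinforcement learner, instead of (or alongside) the game score, is what lets an agent make progress in games where explicit rewards are sparse.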