Computers can now keep SECRETS: Google's neural network is learning to encrypt its own messages Experts like Professor Stephen Hawking and Elon Musk have warned of the dangers of artificial intelligence becoming too smart and turning against humanity. Now it seems a team at Google has brought computing another step towards this nightmare becoming a reality, by teaching its networks to keep secrets. The computer systems have learned how to protect their messages from prying eyes. Just last week, Professor Stephen Hawking warned artificial intelligence could develop a will of its own that is in conflict with that of humanity.
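The goal the Google networks discovered for themselves is the classical one: a message should be readable only to a party holding the shared key. The sketch below illustrates that property with a plain XOR one-time pad. This is not the neural scheme from the research (which trains "Alice", "Bob" and an eavesdropper "Eve" adversarially); it is just a minimal, conventional example of the secrecy objective the networks were rewarded for.

```python
# Toy secrecy sketch: a XOR one-time pad, shown only as a classical
# analogue of what Google's networks learned to do on their own.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the matching key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # random key shared by sender and receiver

ciphertext = xor_bytes(message, key)      # the sender ("Alice") encrypts
recovered = xor_bytes(ciphertext, key)    # the receiver ("Bob") decrypts with the key

assert recovered == message               # only the key holder recovers the text
```

Without the key, the ciphertext is statistically indistinguishable from random noise, which is exactly the condition the eavesdropping network in the experiment was unable to break.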
Two students have built an AI that could be the basis of future killer robots. In a controversial move, the pair trained an AI bot to kill human players within the classic video game Doom. Critics have expressed concern over the AI technology and the risk it could pose to humans in future. Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University in Pittsburgh, trained an AI bot - nicknamed Arnold - using 'deep reinforcement learning' techniques. While Google's AI software had previously been shown to tackle vintage 2D Atari games such as Space Invaders, the students wanted to expand the technology to tackle three-dimensional first-person shooter games like Doom.
2029 - that's when Google's Director of Engineering Ray Kurzweil thinks computers will reach human levels of intelligence. "By 2029, computers will have human-level intelligence," Kurzweil said in an interview at the SXSW Conference with Shira Lazar and Amy Kurzweil Comix. Known as the Singularity, the event is often discussed by scientists, futurists, technology stalwarts and others as a time when artificial intelligence will cause machines to become smarter than human beings. The time frame is much sooner than other experts have predicted, including British theoretical physicist Stephen Hawking, and sooner than previous predictions from Kurzweil himself, who had said it may occur as soon as 2045. Softbank CEO Masayoshi Son, who recently acquired ARM Holdings with the intent of being one of the driving forces in the Singularity, has previously said it could happen in the next 30 years.
Northwestern University's Ken Forbus is closing the gap between humans and machines. Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve moral dilemmas. "In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."
The Turing test has always been an approximate benchmark for good AI. In the test, a human converses with a machine over text for five minutes; if the human doesn't realize they are talking to a machine, then the computer passes as AI "indistinguishable" from human intelligence. Earlier this year, Georgia Tech professor Ashok Goel noticed he was short of teaching assistants for his computer science course. So Goel programmed IBM's Watson system to work as an online chatbot, answering some of the 10,000 online questions submitted by students during the course.