Computers can now keep SECRETS: Google's neural network is learning to encrypt its own messages. Experts like Professor Stephen Hawking and Elon Musk have warned of the dangers of artificial intelligence becoming too smart and turning against humanity. Now it seems a team at Google has brought computing another step towards this nightmare becoming a reality, by teaching its networks to keep secrets. The computer systems have learned how to protect their messages from prying eyes. Just last week, Professor Stephen Hawking warned artificial intelligence could develop a will of its own that is in conflict with that of humanity.
Two students have built an AI that could be the basis of future killer robots. In a controversial move, the pair trained an AI bot to kill human players within the classic video game Doom. Critics have expressed concern over the AI technology and the risk it could pose to humans in future. Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University in Pittsburgh, trained an AI bot - nicknamed Arnold - using 'deep reinforcement learning' techniques. While Google's AI software had previously been shown to tackle vintage 2D Atari games such as Space Invaders, the students wanted to expand the technology to tackle three-dimensional first-person shooter games like Doom.
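The core idea behind the 'deep reinforcement learning' mentioned above is that an agent learns which actions are valuable purely from reward signals, by trial and error. The sketch below is a minimal tabular Q-learning example on an invented five-cell corridor; it is a drastic simplification for illustration only, not the students' Doom system (which used neural networks operating on game pixels), and all states, rewards and hyperparameters here are assumptions.

```python
import random

# Toy tabular Q-learning: an agent in a 5-cell corridor learns, from
# reward alone, to walk right towards the goal cell. Illustrative only;
# real 'deep' RL replaces the Q table with a neural network.
N_STATES = 5          # corridor cells 0..4; reward waits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reaching the last cell pays reward 1."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current values, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy should head right (+1)
# from every non-terminal cell.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)]
print(policy)
```

The same learning loop, with the Q table swapped for a deep network and the corridor swapped for raw game frames, is the recipe the Atari and Doom agents follow.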
2029. That's when Google's Director of Engineering Ray Kurzweil thinks computers will reach human levels of intelligence. "By 2029, computers will have human-level intelligence," Kurzweil said in an interview at the SXSW Conference with Shira Lazar and Amy Kurzweil Comix. Known as the Singularity, the event is often discussed by scientists, futurists, technology stalwarts and others as a time when artificial intelligence will cause machines to become smarter than human beings. The time frame is much sooner than what other experts have said, including British theoretical physicist Stephen Hawking, and sooner than Kurzweil's own previous prediction that it may occur as soon as 2045. Softbank CEO Masayoshi Son, who recently acquired ARM Holdings with the intent of being one of the driving forces in the Singularity, has previously said it could happen in the next 30 years.
Northwestern University's Ken Forbus is closing the gap between humans and machines. Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve moral dilemmas. "In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."
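The analogical reasoning described above rests on matching relational structure: entities in two situations correspond when they play the same roles in the same relations. The toy sketch below illustrates that bare idea with the classic solar-system/atom analogy; it is a hypothetical, greatly simplified illustration, not Forbus's actual SME, and the relation names and greedy matching rule are invented for this example.

```python
# Toy analogical mapping: align entities across two situations when
# they fill the same argument slot of an identically named relation.
# A drastic simplification of structure-mapping, for illustration only.
solar = [("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")]

def map_entities(base, target):
    """Greedily pair up entities that occupy matching slots of
    relations with the same name in both domains."""
    mapping = {}
    for rel, *args in base:
        for rel2, *args2 in target:
            if rel == rel2 and len(args) == len(args2):
                for a, b in zip(args, args2):
                    mapping.setdefault(a, b)   # keep first pairing found
    return mapping

# The shared relational structure pairs sun with nucleus
# and planet with electron.
print(map_entities(solar, atom))
```

The point of the example is Forbus's quote in miniature: it is the relational statements, not any surface similarity between a sun and a nucleus, that carry the analogy.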
Alan Turing Institute lends support to 'Commission on Artificial Intelligence' proposed by the Science and Technology Committee. The establishment of an AI ethics board in the UK has taken a big step forward, with the Alan Turing Institute agreeing to work with the UK government to explore the ethics questions surrounding the development of artificial intelligence. In a letter to the Science and Technology Committee, the Alan Turing Institute welcomed the Committee's recent Report on Robotics and Artificial Intelligence and put itself forward as an institution prepared to take a leading role in taking AI forward. The letter to Committee chair Stephen Metcalfe was in response to a report published by the Committee on 12 October 2016, in which it recommended that a 'standing Commission on Artificial Intelligence' be established at the Alan Turing Institute to examine the social, ethical and legal implications of recent and potential developments in AI. "Your Report recommends that a standing Commission on Artificial Intelligence be established at the Alan Turing Institute. Should this recommendation be taken forward, we would very much welcome the opportunity to lead the creation of the Commission." MP Stephen Metcalfe returned support in kind, saying in response to the Institute's letter: "We welcome the Alan Turing Institute's support for our report on Robotics and Artificial Intelligence and are pleased that, as the UK's new data science research institute, it is ready to lead the standing Commission on Artificial Intelligence that we recommended establishing." Debate surrounding the ethics of AI has been gathering speed in recent times, with Melanie Mitchell recently telling CBR that the AI community is 'not very well prepared' when it comes to the ethical issues that come with using AI in life-critical areas.