Google Does Not Want A Robot Apocalypse To Happen, So It's Building A Button To Turn Off AI

For a generation raised on the Terminator movies, visions of a robot uprising come to mind whenever news of advances in artificial intelligence surfaces. Great minds such as Tesla Motors and SpaceX CEO Elon Musk, famed astrophysicist Stephen Hawking and Apple co-founder Steve Wozniak have previously voiced their concern about the possibility of a robot apocalypse.

It would seem that Google, one of the companies at the forefront of artificial intelligence development, now shares some of these concerns: its DeepMind unit has published a study that seeks to build safety measures into the technology. The paper, a collaboration between DeepMind and the Future of Humanity Institute at Oxford University, describes a "big red button" that would allow humans to turn off an artificial intelligence in a robot and take control of it if the robot is misbehaving or malfunctioning.

And just so it is clear, the Future of Humanity Institute is named as such because it wants humanity to have a future; its founding director, Nick Bostrom, is one of the more vocal voices warning about the risks of artificial intelligence.