It's predicted that sometime in the next 25 years, artificial intelligence machines will match, and in some ways surpass, human intelligence. The potential ripple effects of that are staggering. Corporations and governments are now spending billions of dollars on developing bigger and smarter A.I. technology. Their goal is to create machines that think for themselves. But some warn it could go terribly wrong, and the warning is being sounded by the likes of Bill Gates, Elon Musk and Stephen Hawking.
Cosmologist Stephen Hawking and Tesla CEO Elon Musk have endorsed a set of principles established to ensure that self-thinking machines remain safe and act in humanity's best interests. Machines are getting more intelligent every year, and researchers believe they could possess human levels of intelligence in the coming decades. Once they reach that point, they could start to improve themselves and create even more powerful software, according to Oxford philosopher Nick Bostrom and several others in the field. In 2014, Musk warned that artificial intelligence has the potential to be "more dangerous than nukes," while Hawking said in December 2014 that AI could end humanity. At the same time, proponents note that AI could also help cure cancer and slow global warming.
The year is 2050 and super-intelligent robots have taken over the planet. Except you have no idea, because you're living in a computer simulation depicting what life was like in 2015. Everything you see and touch right now has been created by robotic overlords who are using humanity as playthings in their virtual game. That's the radical theory put forward by a number of scientists over the years, who claim there is a possibility that our world as we know it, the universe and everything in it, is fake.
"I think there's recognition it makes sense to have some people thinking about [AI safety] now," says Oxford University philosophy professor Nick Bostrom. Over the past year, Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence, and he says those warnings are now earning the attention of companies pushing the boundaries of AI research. Many people working on AI remain skeptical of or even hostile to Bostrom's ideas. But some prominent technologists and scientists, including Elon Musk, Stephen Hawking, and Bill Gates, have echoed some of his concerns.