Cosmologist Stephen Hawking and Tesla CEO Elon Musk endorsed a set of principles this week established to ensure that self-thinking machines remain safe and act in humanity's best interests. Machines are getting more intelligent every year, and researchers believe they could reach human levels of intelligence in the coming decades. Once they reach that point, they could begin improving themselves and creating other, even more powerful AIs, known as superintelligences, according to Oxford philosopher Nick Bostrom and several others in the field. In 2014, Musk, who has his own $1 billion AI research company, warned that AI has the potential to be "more dangerous than nukes," and in December of that year Hawking said that AI could end humanity. But there are two sides to the coin.
Artificial intelligence is an amazing technology that's changing the world in fantastic ways, but anybody who has ever seen the movie Terminator knows that there are dangers associated with advanced AI. That's why Elon Musk, Stephen Hawking, and hundreds of other researchers, tech leaders, and scientists have endorsed a list of 23 guiding principles that should steer AI development in a productive, ethical, and safe direction. The Asilomar AI Principles were developed after the Future of Life Institute brought dozens of experts together for its Beneficial AI 2017 conference. The experts, whose ranks included roboticists, physicists, economists, and philosophers, held fierce debates about AI safety, the economic impact on human workers, and programming ethics, among other topics. A principle made the final list only if 90 percent of the experts agreed on its inclusion.
The principles are laid out to ensure that the rapidly developing artificial intelligence systems of the world remain in service of humanity. Recently, an artificial intelligence (AI) computer program called Libratus defeated four poker professionals, a "landmark step" for AI, said its creator, because poker had been particularly challenging for AI up until that point. AI's victory in the poker contest, following wins in chess and the even more complex board game Go, is a reminder that AI is getting smarter all the time. How, then, does humanity make sure that it remains an ally and doesn't go rogue like Skynet did in the Terminator series? That was the question discussed by global experts at the Beneficial AI conference held in early January in California.
These principles were developed in conjunction with the 2017 Asilomar conference. Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead. Please help build momentum by sharing the Principles link: futureoflife.org/ai-principles. To date, the Principles have been signed by AI/robotics researchers and others.
It's the stuff of many a sci-fi book or movie: could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try to make sure that doesn't happen. They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk. Called the Asilomar AI Principles (after the beach in California where they were drawn up), the guidelines cover research issues, ethics and values, and longer-term issues, spanning everything from how scientists should work with governments to how lethal weapons should be handled. On that point, Principle 18 states: "An arms race in lethal autonomous weapons should be avoided."