Cosmologist Stephen Hawking and Tesla CEO Elon Musk this week endorsed a set of principles established to ensure that self-thinking machines remain safe and act in humanity's best interests. Machines are getting more intelligent every year, and researchers believe they could reach human levels of intelligence in the coming decades. Once they reach that point, they could begin improving themselves and creating other, even more powerful AIs, known as superintelligences, according to Oxford philosopher Nick Bostrom and several others in the field. In 2014, Musk, who backs his own $1 billion AI research company, warned that AI has the potential to be "more dangerous than nukes," while Hawking said in December 2014 that AI could end humanity. But there are two sides to the coin.
Artificial intelligence is an amazing technology that's changing the world in fantastic ways, but anybody who has ever seen the movie Terminator knows that there are some dangers associated with advanced AI. That's why Elon Musk, Stephen Hawking, and hundreds of other researchers, tech leaders, and scientists have endorsed a list of 23 guiding principles meant to steer AI development in a productive, ethical, and safe direction. The Asilomar AI Principles were developed after the Future of Life Institute brought dozens of experts together for its Beneficial AI 2017 conference. The experts, whose ranks included roboticists, physicists, economists, and philosophers, debated fiercely over AI safety, the economic impact of AI on human workers, and programming ethics, among other topics. To make the final list, a principle had to win the agreement of 90 percent of the experts.
These principles were developed in conjunction with the 2017 Asilomar conference. Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead. To date, the Principles, published at futureoflife.org/ai-principles, have been signed by AI and robotics researchers and others.
The principles are laid out to ensure that the world's rapidly developing artificial intelligence systems remain in service of humanity. Recently, an artificial intelligence (AI) computer program called Libratus defeated four poker professionals – a "landmark step" for AI, said its creator, because poker had been particularly challenging for AI until that point. AI's victory in the poker contest – following wins in chess and the even more complex board game Go – is a reminder that AI is getting smarter all the time. How, then, does humanity make sure AI remains an ally and doesn't go rogue like Skynet in the Terminator series? That was the question global experts discussed at the Beneficial AI conference held in early January in California.
Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?

Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.