Elon Musk and more than 100 leaders and experts in artificial intelligence (AI) have come together to urge the UN to commit to an outright ban on killer robot technology. An open letter signed by Musk, Google DeepMind's Mustafa Suleyman, and 114 other AI and robotics specialists urges the UN to prevent "the third revolution in warfare" by banning the development of all lethal autonomous weapon systems. The open letter, released to coincide with the world's largest conference on AI – IJCAI 2017, which is taking place in Melbourne, Australia this week – warns of a near future in which independent machines will be able to choose and engage their own targets, including innocent humans as well as enemy combatants. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the consortium writes. "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."
Elon Musk, Google DeepMind co-founder Mustafa Suleyman, and 114 other leading AI and robotics experts have joined together to ask the UN to ban the use of so-called killer robots in an open letter published today. The group is concerned about the potential use of lethal autonomous weapons and how they might be applied in the future, and their short note was released by the Future of Life Institute. The text was made public to coincide with the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, Australia, according to a press release. "Lethal autonomous weapons" refers to drones, autonomous machine guns, tanks, and other forms of weaponry controlled by AI on next-generation battlefields. Musk, for one, is famously wary of AI's potential to go bad, recently calling it "the greatest threat we face as a civilization," above even nuclear weapons. But the open letter is the first time a group of AI and robotics companies have joined forces to petition the UN specifically about autonomous weapons, according to the release.
But it could be a real threat, warned researchers at the recent World Economic Forum. Unlike today's drones, which are still controlled by human operators, autonomous weapons could potentially be programmed to select and engage targets on their own. "It was one of the concerns that we itemized last year," Toby Walsh, professor of artificial intelligence (AI) at the school of computer science and engineering at the University of New South Wales, told FoxNews.com. "Most of us believe that we don't have the ability to build ethical robots," he added. "What is especially worrying is that the various militaries around the world will be fielding robots in just a few years, and we don't think anyone will be building ethical robots."
Over a hundred experts in robotics and artificial intelligence are calling on the UN to ban the development and use of killer robots and add them to a list of 'morally wrong' weapons that includes blinding lasers and chemical weapons. Google DeepMind's Mustafa Suleyman and Tesla's Elon Musk are among the most prominent names on a list of 116 tech experts who have signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race. In December 2016 the UN voted to begin formal talks on the future of such weapons, including tanks, drones and automated machine guns. So far, 19 of 123 member states have called for an outright ban on lethal autonomous weapons. One of the letter's key organisers, Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, unveiled the letter at the opening of the International Joint Conference on Artificial Intelligence in Melbourne.
Artificial intelligence researchers from nearly 30 countries are boycotting a South Korean university over concerns a new lab in partnership with a leading defence company could lead to "killer robots". More than 50 leading academics signed the letter calling for a boycott of Korea Advanced Institute of Science and Technology (KAIST) and its partner, defence manufacturer Hanwha Systems. The researchers said they would not collaborate with the university or host visitors from KAIST over fears it sought to "accelerate the arms race to develop" autonomous weapons. "There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern," said Toby Walsh, the organiser of the boycott and a professor at the University of New South Wales. "This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms."