The risk from robot weapons


One of the greatest risks of the next few decades is likely to come from robotic weapons, AI (artificial intelligence) weapons, or lethal autonomous weapons (LAWs). In August 2017, 116 specialists from 26 countries, including some of the world's leading robotics and artificial intelligence pioneers, called on the United Nations to ban the development and use of killer robots. They warned that this arms race threatens to usher in the 'third revolution' in warfare, after gunpowder and nuclear arms. They wrote, "Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

Killer autonomous weapons are coming... but they're not here yet


Pioneers from the worlds of artificial intelligence and robotics – including Elon Musk and DeepMind's Mustafa Suleyman – have asked the United Nations to ban autonomous weapon systems. A letter from the experts says the weapons currently under development risk opening a "Pandora's box" that, once opened, could create a dangerous "third revolution in warfare". The open letter coincides with the International Joint Conference on Artificial Intelligence, which is currently being held in Melbourne, Australia. Ahead of the same conference in 2015, the Tesla founder was joined by Stephen Hawking, Steve Wozniak and Noam Chomsky in condemning a new "global arms race". Suggestions that warfare will be transformed by artificially intelligent weapons capable of making their own decisions about whom to kill are not hyperbolic.

Robots in the battlefield: Georgia Tech professor thinks AI can play a vital role


A pledge against the use of autonomous weapons was signed in July by over 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries. The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, called on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons". The institute defines lethal autonomous weapons systems -- also known as "killer robots" -- as weapons that can identify, target, and kill a person without a human "in-the-loop". Georgia Tech professor Ronald Arkin told D61 Live on Wednesday that instead of banning autonomous systems in war zones, they should be guided by strong legal and legislative directives. Citing a recent survey of 27,000 people by the European Commission, Arkin said 60 percent of respondents felt that robots should not be used for the care of children, the elderly, and the disabled, even though this is the space in which most roboticists are working.

Podcast: Six Experts Explain the Killer Robots Debate - Future of Life Institute


Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it's complicated. In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39). You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher.

Forget Killer Robots: Autonomous Weapons Are Already Online


Earlier this year, concerns over the development of autonomous military systems -- essentially AI-driven machinery capable of making battlefield decisions, including the selection of targets -- were once again the center of attention at a United Nations meeting in Geneva. "Where is the line going to be drawn between human and machine decision-making?" Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security in Washington, D.C., told Time magazine. "Are we going to be willing to delegate lethal authority to the machine?" Meanwhile, malicious computer programs that could be described as 'intelligent autonomous agents' are already stealing people's data.