"Banning killer robots is both politically savvy and morally necessary," said Mary Wareham, the Arms Division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. "European states should take the lead and open ban treaty negotiations if they are serious about protecting the world from this horrific development." Countries attending the annual meeting of states parties to the Convention on Conventional Weapons (CCW) at the United Nations in Geneva will decide on November 15 whether to continue diplomatic talks on killer robots, also known as lethal autonomous weapons systems or fully autonomous weapons. Since 2014, these states have held eight meetings on lethal autonomous weapons systems under the auspices of the Convention on Conventional Weapons (CCW), a major disarmament treaty. Over the course of those meetings, states have built a shared understanding of concern, but they have struggled to reach agreement on credible recommendations for multilateral action due to the objections of a handful of military powers, most notably Russia and the United States.
From the spears hurled by Romans to the missiles launched by fighter pilots, the weapons humans use to kill each other have always been subject to improvement. Militaries seek to make each one ever more lethal and, in doing so, better protect the soldier who wields it. But in the next evolution of combat, the U.S. Army is heading down a path that may lead humans off the battlefield entirely. Over the next few years, the Pentagon is poised to spend almost $1 billion on a range of robots designed to complement combat troops. Beyond scouting and explosives disposal, these new machines will sniff out hazardous chemicals or other agents, perform complex reconnaissance and even carry a soldier's gear.
Good news, fellow humans: The United Nations has decided to take on killer robots. At the international Convention on Conventional Weapons in Geneva, 123 participating nations voted to initiate official discussions on the danger of lethal autonomous weapons systems. That's the emerging designation for so-called "killer robots" -- weapons controlled by artificial intelligence that can target and strike without human intervention. The agreement is the latest development in a growing movement calling for a preemptive ban on weaponized A.I. and deadly autonomous weapons. Last year, a coalition of more than 1,000 scientists and industry leaders, including Elon Musk and representatives of Google and Microsoft, signed an official letter to the United Nations demanding action.
Allowing machines to select and target humans sounds like something out of an apocalyptic sci-fi movie. But as we enter another decade, it is becoming increasingly obvious that we're teetering on the edge of that dangerous threshold. Countries including China, Israel, South Korea, Russia and the United States are already developing and deploying precursors to fully autonomous weapons, such as armed drones that are piloted remotely. These countries are investing heavily in military applications of artificial intelligence with the goal of gaining a technological advantage on the next-generation battlefield. Fully autonomous weapons, once activated, would select and engage targets without further human intervention.