
Robots with Guns: The Rise of Autonomous Weapons Systems


The future of war lies in part with what the military calls "autonomous weapons systems" (AWS): sophisticated computerized devices that, as defined by the U.S. Department of Defense, "once activated, can select and engage targets without further intervention by a human operator." Whether that is a good idea or a bad one is debatable, but the question is no longer if, only how soon, autonomous, artificially intelligent machines will fight side by side with human soldiers on the battlefield. United States Army General Robert W. Cone (now deceased) predicted in 2014 that as many as one-quarter of all U.S. combat soldiers might be replaced by drones and robots within the next 30 years. In the U.S., both the Army and Marine Corps are already testing remote-controlled devices like the Modular Advanced Armed Robotic System (MAARS), an unmanned ground vehicle (UGV) designed primarily for reconnaissance that can also be equipped with a grenade launcher and a machine gun. Armed systems that operate without human control are known as lethal autonomous weapons systems (LAWS for short, or more pithily, "killer robots," as critics have dubbed them). Though they may conjure up futuristic, dystopian images redolent of The Terminator (the Arnold Schwarzenegger film about an armed super-robot from the future) or Robopocalypse (Daniel Wilson's 2011 science fiction novel about AI weapons turning on their creators), the dangers they pose are firmly rooted in reality.

World Must Keep Lethal Weapons Under Human Control, Germany Says

U.S. News

Killer robots that make life-or-death decisions on the basis of anonymous data sets, completely beyond human control, are already a shockingly real prospect today.

Ban or No Ban, Hard Questions Remain on Autonomous Weapons

AITopics Original Links

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE. Last month, over 1,000 robotics and artificial intelligence researchers signed an open letter calling for a ban on offensive autonomous weapons, putting new energy into an already spirited debate about the role of autonomy in weapons of the future. These researchers join an ongoing conversation among lawyers, ethicists, academics, activists, and defense professionals on potential future weapons that would select, engage, and destroy targets without a human in the loop. As AI experts, the authors of the letter can help militaries better understand the risks associated with increasingly intelligent and autonomous systems, and we welcome their contribution to the discussion.

Warfare Will Be Revolutionized: UN Debates Autonomous Weapons, Many Call for Ban


Talks on lethal autonomous weapons systems began at the United Nations on November 13, amid calls for an international ban on independent "killer robots" that could revolutionize warfare. The discussions, held under the banner of the Convention on Certain Conventional Weapons, are scheduled to last all week.

Deadly US Applications of Artificial Intelligence


In the United States and around the world, public concern is rising at the prospect of weapons systems that would select and attack targets without human intervention. As the United States Department of Defense releases a strategy on artificial intelligence (AI), questions loom about whether the US government intends to accelerate its investments in weapons systems that would select and engage targets without meaningful human control. The strategy considers a range of potential, mostly benign uses of AI and makes the bold claim that AI can help "reduce the risk of civilian casualties" by enabling greater accuracy and precision. The strategy also commits to consider how to handle hacking, bias, and "unexpected behavior," among other concerns. Scientists have long warned about the potentially disastrous consequences that could arise when fully autonomous weapons systems, created and deployed by opposing forces and driven by complex algorithms, meet in warfare.