Potential and Peril

Communications of the ACM

The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement on whether such weapons should be banned in line with international humanitarian law under the Geneva Conventions. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and fully autonomous weapons, and on whether more or fewer people will lose their lives as a result of deploying LAWS. There are also conflicting views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapons System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.


Thousands of scientists pledge not to help build killer AI robots

#artificialintelligence

Thousands of scientists who specialise in artificial intelligence (AI) have declared that they will not participate in the development or manufacture of robots that can identify and attack people without human oversight. Demis Hassabis at Google DeepMind and Elon Musk at the US rocket company SpaceX are among more than 2,400 signatories to the pledge, which intends to deter military firms and nations from building lethal autonomous weapon systems, also known as LAWS. The move is the latest from concerned scientists and organisations to highlight the dangers of handing over life-and-death decisions to AI-enhanced machines. It follows calls for a preemptive ban on technology that campaigners believe could usher in a new generation of weapons of mass destruction. Orchestrated by the Boston-based Future of Life Institute, the pledge calls on governments to agree norms, laws, and regulations that stigmatise and effectively outlaw the development of killer robots.


Losing Control: The Dangers of Killer Robots

#artificialintelligence

New technology could lead humans to relinquish control over decisions to use lethal force. As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching. Fully autonomous weapons, also known as "killer robots," are quickly moving from the realm of science fiction toward reality. The unmanned Sea Hunter, for example, currently sails without weapons, but it exemplifies the move toward greater autonomy.


Why Should We Ban Autonomous Weapons? To Survive.

IEEE Spectrum Robotics

Killer robots pose a threat to all of us. In the movies, this threat is usually personified as an evil machine bent on destroying humanity for reasons of its own. In reality, the threat comes from within us.