Potential and Peril

Communications of the ACM

The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical questions about the use of AI and causing disagreement over whether such weapons should be banned in line with international humanitarian law under the Geneva Conventions.

Much of the disagreement around LAWS concerns where the line should be drawn between weapons with limited human control and fully autonomous weapons, and whether more or fewer people will lose their lives as a result of their deployment. There are also contrary views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapon System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.


Thousands of scientists pledge not to help build killer AI robots

#artificialintelligence

Thousands of scientists who specialise in artificial intelligence (AI) have declared that they will not participate in the development or manufacture of robots that can identify and attack people without human oversight. Demis Hassabis of Google DeepMind and Elon Musk of the US rocket company SpaceX are among more than 2,400 signatories to the pledge, which intends to deter military firms and nations from building lethal autonomous weapon systems (LAWS). The move is the latest from concerned scientists and organisations highlighting the dangers of handing life-and-death decisions to AI-enhanced machines. It follows calls for a preemptive ban on technology that campaigners believe could usher in a new generation of weapons of mass destruction. Orchestrated by the Boston-based Future of Life Institute, the pledge calls on governments to agree on norms, laws, and regulations that stigmatise and effectively outlaw the development of killer robots.


Losing Control: The Dangers of Killer Robots

#artificialintelligence

New technology could lead humans to relinquish control over decisions to use lethal force. As artificial intelligence advances, the possibility that machines could independently select and fire on targets is fast approaching. Fully autonomous weapons, also known as "killer robots," are quickly moving from the realm of science fiction toward reality. The unmanned Sea Hunter gets underway. At present it sails without weapons, but it exemplifies the move toward greater autonomy.


Scientists call for a ban on AI-controlled killer robots

#artificialintelligence

"We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," Human Right's Watch advocacy director Mary Wareham told the BBC. Another big question that arises: who is responsible when a machine does decide to take a human life? Is it the person who made the machine? "The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death," associate professor from the New School in New York Peter Asaro told the BBC. But not everybody is on board to fully denounce the use of AI-controlled weapon systems.