Allowing machines to select and target humans sounds like something out of an apocalyptic sci-fi movie. But as we enter another decade, it is becoming increasingly obvious that we're teetering on the edge of that dangerous threshold. Countries including China, Israel, South Korea, Russia and the United States are already developing and deploying precursors to fully autonomous weapons, such as armed drones that are piloted remotely. These countries are investing heavily in military applications of artificial intelligence with the goal of gaining a technological advantage on the next-generation battlefield. These killer robots, once activated, would select and engage targets without further human intervention.
The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement over whether such weapons should be banned in line with international humanitarian law under the Geneva Conventions. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and fully autonomous weapons, and on whether more or fewer people would lose their lives as a result of their deployment. There are also contrary views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapons System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that kill independently - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).
Now do the same with tanks, helicopters and biped/quadruped robots. Welcome to the not-so-distant future of LAWS, or lethal autonomous weapon systems. A conclusion reached at the UN conference on regulating LAWS in warfare, held this August in Geneva, was that, instead of banning them outright, the topic should be revisited in November. The stall was initiated by the U.S., Russia, Israel, South Korea and Australia. Until that follow-up meeting, one thing is sure: AI-controlled robotic warfare isn't too far off.
WORLD superpowers are engaged in a feverish "arms race" to develop the first killer robots completely removed from human control, the Sun Online can reveal. These machines will mark a dramatic escalation in computer AI from the drones and robots currently in use, all of which still require a human to press the "kill button". In a series of exclusive interviews, leading experts told The Sun Online that machines making life-or-death decisions will likely be developed within the next 10 years. Fears are now growing about the implications of creating such smart machines, as are concerns that they will fall into the hands of terrorist groups such as ISIS. Locked in this new race for military supremacy are Britain, the US, China, Russia and Israel – all of which have robot programmes of varying advancement.