WORLD superpowers are engaged in a feverish "arms race" to develop the first killer robots completely removed from human control, the Sun Online can reveal. These machines will mark a dramatic escalation in computer AI beyond the drones and robots currently in use, all of which still require a human to press the "kill button". In a series of exclusive interviews, leading experts told The Sun Online that machines making life-or-death decisions will likely be developed within the next 10 years. Fears are now growing about the implications of creating such smart machines, as are concerns that they will fall into the hands of terrorist groups such as ISIS. Locked in this new race for military supremacy are Britain, the US, China, Russia and Israel – all of which have robot programmes of varying advancement.
The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement on whether such weapons should be banned in line with international humanitarian law under the Geneva Conventions. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and fully autonomous weapons, and on whether more or fewer people will lose their lives as a result of the deployment of LAWS. There are also contrary views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapon System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.
Killer robots must be banned to prevent unlawful killings, injuries and other violations of human rights 'before it's too late', according to Amnesty International. The human rights non-profit is calling upon the United Nations to place tough new restraints on the development of autonomous weapon systems ahead of key negotiations in Geneva this week. The development of automated weapons, which can pick out and eliminate targets without input from a human being, has proliferated over the past decade. Countries including the UK, France, Israel and the US are known to be developing the technology for use in military and police operations. Amnesty International argues humans should remain 'at the core of critical decisions' on the use of deadly force, such as the selection and engagement of targets.
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that kill independently - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).
Now do the same with tanks, helicopters and biped/quadruped robots. Welcome to the not-so-distant future of LAWS, or lethal autonomous weapon systems. The conclusion reached at the UN conference on regulating LAWS in warfare, held this August in Geneva, was that instead of banning them outright, the topic should be revisited in November. The stall was initiated by the U.S., Russia, Israel, South Korea and Australia. Until that follow-up meeting, one thing is sure - AI-controlled robotic warfare isn't too far off.