The potential for advances in information-age technologies to undermine nuclear deterrence and shape the risk of nuclear escalation represents a critical question for international politics. One challenge is that uncertainty about the trajectory of technologies such as autonomous systems and artificial intelligence (AI) makes assessments difficult. This paper evaluates the relative impact of autonomous systems and artificial intelligence in three areas: nuclear command and control, nuclear delivery platforms and vehicles, and conventional applications of autonomous systems with consequences for nuclear stability. We argue that countries may be more likely to use risky forms of autonomy when they fear that their second-strike capabilities will be undermined. Additionally, the potential deployment of uninhabited, autonomous nuclear delivery platforms and vehicles could raise the prospect of accidents and miscalculation. Conventional military applications of autonomous systems could simultaneously influence nuclear force postures and first-strike stability in previously unanticipated ways. In particular, the need to fight at machine speed and the cognitive risk introduced by automation bias could increase the risk of unintended escalation. Finally, if used properly, many applications of more autonomous systems in nuclear operations could increase reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis.
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that can kill independently - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).
Now do the same with tanks, helicopters and biped/quadruped robots. Welcome to the not-so-distant future of LAWS, or lethal autonomous weapon systems. The UN conference on regulating LAWS in warfare, held this August in Geneva, concluded that instead of banning them outright, the topic should be revisited in November. The stall was initiated by the U.S., Russia, Israel, South Korea and Australia. Until that revision meeting, one thing is sure: AI-controlled robotic warfare isn't too far off.
It is a terrifying vision of the future of battle. Israel Aerospace Industries (IAI) may be focused on the sky, but it has stepped down to land to develop the newest member of its unmanned ground robotic systems family: RoBattle, a semi-autonomous combat and support robot designed to assist ground soldiers in the field. The heavy-duty platform is equipped with a 'robotic kit' consisting of vehicle control, navigation, RT mapping and autonomy, sensors and mission payloads. In addition to ambushing and attacking on command, the combat-ready platform can raise its body four feet in the air to tackle obstacles or crouch down 23 inches to hide from enemies.