The following correction was printed in the Guardian's Corrections and clarifications column on Wednesday November 1 2006: In the report below we describe John Pike as "director of global security and spokesman for the Federation of American Scientists". He has not held that position for some years. He is the founder and director of GlobalSecurity.org.

In November 2004, during the second battle of Fallujah, an American uncrewed aerial vehicle (UAV) - a robot plane - located a mortar battery that had been hampering the US operation to retake the town. The mortar's position was logged by the UAV's operator, who was sitting at his desk at Nellis Air Force Base near Las Vegas, thousands of miles away. Using the internet, the operator contacted the operator of another armed UAV at a desk in central command ("Centcom") - a safe area away from the theatre of war, with centres in Kuwait, Qatar or Iraq.
The dawn of the 21st century has been called the decade of the drone. Unmanned aerial vehicles, remotely operated by pilots in the United States, rain Hellfire missiles on suspected insurgents in South Asia and the Middle East. Now a small group of scholars is grappling with what some believe could be the next generation of weaponry: lethal autonomous robots. At the center of the debate is Ronald C. Arkin, a Georgia Tech professor who has hypothesized lethal weapons systems that are ethically superior to human soldiers on the battlefield. A professor of robotics and ethics, he has devised algorithms for an "ethical governor" that he says could one day guide an aerial drone or ground robot to either shoot or hold its fire in accordance with internationally agreed-upon rules of war. But some scholars have dismissed Mr. Arkin's ethical governor as "vaporware," arguing that current technology is nowhere near the level of complexity that would be needed for a military robotic system to make life-and-death ethical judgments.
The future of war lies in part with what the military calls "autonomous weapons systems" (AWS), sophisticated computerized devices which, as defined by the U.S. Department of Defense, "once activated, can select and engage targets without further intervention by a human operator." Whether that is a good idea or a bad one is debatable, but it is not a question of if, but of how soon, autonomous, artificially intelligent machines will fight side by side with human soldiers on the battlefield. United States Army General Robert W. Cone (now deceased) predicted in 2014 that as many as one-quarter of all U.S. combat soldiers might be replaced by drones and robots within the next 30 years. In the U.S., both the Army and Marine Corps are already testing remote-controlled devices like the Modular Advanced Armed Robotic System (MAARS), an unmanned ground vehicle (UGV) designed primarily for reconnaissance that can also be equipped with a grenade launcher and a machine gun. Weapons that can select and engage targets on their own are known as lethal autonomous weapons systems (LAWS for short, or, more pithily, "killer robots," as critics have dubbed them). Though they may conjure up futuristic, dystopian images redolent of The Terminator (the Arnold Schwarzenegger film about an armed super-robot from the future) or Robopocalypse (Daniel Wilson's 2011 science fiction novel about AI weapons turning on their creators), the dangers they pose are firmly rooted in reality.
Czech writer Karel Čapek's 1920 play R.U.R. (Rossum's Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans - the robots of the title - toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator. Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world's most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality.
The Russian military is more technologically advanced than the U.S. realized and is quickly developing artificial intelligence capabilities to gain battlefield information advantage, an expansive new report commissioned by the Pentagon warned. The federally funded Center for Naval Analyses examined the Kremlin's whole-of-government approach to artificial intelligence development and found it is largely driven by the perceived threat from the United States, combined with lessons learned from its continuing conflicts in Syria and Ukraine about what the future battlefield will look like, the report released Monday said. However, the Russian government faces limitations because its AI efforts are primarily government funded, and it lacks a strong defense industrial base, noted the report, written on behalf of the Pentagon's Joint Artificial Intelligence Center. Still, analysts cautioned Pentagon leadership not to underestimate Russia's technological advances as the U.S. pivots its strategic focus to the Indo-Pacific. The Russian military has been undergoing modernization since 2009.