Do We Want Robot Warriors to Decide Who Lives or Dies?

IEEE Spectrum Robotics

Czech writer Karel Čapek's 1920 play R.U.R. (Rossum's Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans--the robots of the title--toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator. Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world's most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality.


Robots in the battlefield: Georgia Tech professor thinks AI can play a vital role

ZDNet

A pledge against the use of autonomous weapons was signed in July by more than 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries. The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, called on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons". The institute defines lethal autonomous weapons systems -- also known as "killer robots" -- as weapons that can identify, target, and kill a person without a human "in the loop". Georgia Tech professor Ronald Arkin told D61 Live on Wednesday that instead of banning autonomous systems in war zones, they should be guided by strong legal and legislative directives. Citing a recent survey of 27,000 people by the European Commission, Arkin said 60 percent of respondents felt that robots should not be used for the care of children, the elderly, and the disabled, even though this is the space in which most roboticists are working.


'Moral' Robots: the Future of War or Dystopian Fiction?

AITopics Original Links

The dawn of the 21st century has been called the decade of the drone. Unmanned aerial vehicles, remotely operated by pilots in the United States, rain Hellfire missiles on suspected insurgents in South Asia and the Middle East. Now a small group of scholars is grappling with what some believe could be the next generation of weaponry: lethal autonomous robots. At the center of the debate is Ronald C. Arkin, a Georgia Tech professor who has hypothesized lethal weapons systems that are ethically superior to human soldiers on the battlefield. A professor of robotics and ethics, he has devised algorithms for an "ethical governor" that he says could one day guide an aerial drone or ground robot to either shoot or hold its fire in accordance with internationally agreed-upon rules of war. But some scholars have dismissed Mr. Arkin's ethical governor as "vaporware," arguing that current technology is nowhere near the level of complexity that would be needed for a military robotic system to make life-and-death ethical judgments.


This isn't a sci-fi film: Autonomous Weapons Systems could be a reality soon - Firstpost

#artificialintelligence

The threat from such machines is real enough that 100 states have come together to debate the matter of their ban for three consecutive years now. The use of autonomous machines could change the vocabulary of warfare, just as gunpowder and nuclear weapons did upon their entry into the battlefield. In April 2013, NGOs associated with successful efforts to ban landmines and cluster munitions got together in London and issued a call to governments urging the negotiation of a treaty preventing the development, deployment and use of what are known as 'Killer Robots' in popular parlance. In July 2015, some of the world's leading Artificial Intelligence (AI) scientists, including Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and Professor Stephen Hawking, signed a letter with nearly 21,000 signatures asking for an outright ban on these autonomous weapons systems (AWS). "Autonomous weapons will become the Kalashnikovs of tomorrow," states the letter.


A Global Arms Race for Killer Robots Is Transforming the Battlefield

TIME - Tech

Over the weekend, experts on military artificial intelligence from more than 80 world governments converged on the U.N. offices in Geneva for the start of a week of talks on autonomous weapons systems. Many of them fear that after gunpowder and nuclear weapons, we are now on the brink of a "third revolution in warfare," heralded by killer robots--fully autonomous weapons that could decide whom to target and kill without human input. With autonomous technology already in development in several countries, the talks mark a crucial point for governments and activists who believe the U.N. should play a key role in regulating the technology. The meeting comes at a critical juncture. In July, Kalashnikov, the main defense contractor of the Russian government, announced it was developing a weapon that uses neural networks to make "shoot-no shoot" decisions.