AI's 'Oppenheimer moment': autonomous weapons enter the battlefield

The Guardian

A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives fly through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones' ability to "maximize lethality and combat tempo". While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world.


Japan and U.S. block advancement in U.N. talks on autonomous weapons

The Japan Times

GENEVA – Japan, the United States and other countries have blocked any advancement in U.N. talks toward legally binding measures to ban and regulate the development and use of lethal autonomous weapon systems. The Sixth Review Conference of the Convention on Certain Conventional Weapons ended Friday in Geneva without progress, failing to build on eight years of work and leaving the countries and nongovernmental organizations that have called for legally binding rules to express their disappointment. Also referred to as "killer robots," autonomous weapons are artificial-intelligence-powered weapons that use facial recognition and algorithms. Once activated, the weapons can select and attack targets without the assistance of a human operator. They pose ethical, legal and security risks.


World must come together to stop killer robots, experts urge

The Independent - Tech

The world must come together to take action on killer robots, according to a new report. There is increasing agreement among various countries that fully autonomous weapons should be banned to avoid the creation of such killer robots, the new report warns. It would be "unacceptable" if weapons systems were able to select and kill targets without human oversight, the researchers warn. The research, by Human Rights Watch, found that 30 countries had now called for an international treaty requiring that human control be retained over the use of force. The new report, "Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control", reviews the policies of 97 countries that have publicly discussed killer robots since 2013.


Join Ethics In Technology Big Tech, Big Troubles & Big Laughs Comedy night! – Ethics In Tech

#artificialintelligence

The World Post Pandemic: How surveillance and weapon systems, used unwisely, can harm humanity! Mark Twain once wrote "against the assault of laughter nothing can stand." Ethics In Technology is planning to put Mark Twain's statement to the test against the military-industrial complex and the surveillance state. Join us for a night of thought-provoking presentations followed by some wonderful comedy. The show is hosted by Ethics in Technology, a nonprofit watchdog group advocating for a world where big tech firms and technology work to serve humanity and the planet, rather than the other way around.


Nations dawdle on agreeing rules to control 'killer robots' in future wars - Reuters

#artificialintelligence

NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that kill independently - but are moving at a snail's pace on agreeing global rules over their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but they all have human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS) which can identify, target, and kill a person all on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).


Europe Poll Supports Killer Robots Ban

#artificialintelligence

"Banning killer robots is both politically savvy and morally necessary," said Mary Wareham, the Arms Division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. "European states should take the lead and open ban treaty negotiations if they are serious about protecting the world from this horrific development." Countries attending the annual meeting of states parties to the Convention on Conventional Weapons (CCW) at the United Nations in Geneva will decide on November 15 whether to continue diplomatic talks on killer robots, also known as lethal autonomous weapons systems or fully autonomous weapons. Since 2014, these states have held eight meetings on lethal autonomous weapons systems under the auspices of the CCW, a major disarmament treaty. Over the course of those meetings, states have built a shared understanding of concern, but they have struggled to reach agreement on credible recommendations for multilateral action due to the objections of a handful of military powers, most notably Russia and the United States.


A.I. experts say killer robots are the next 'weapons of mass destruction'

#artificialintelligence

A former Google software engineer is sounding the alarm on killer robots. Laura Nolan resigned from Google last year when the tech giant started working with the U.S. military on drone technology, and since then, she has joined the Campaign to Stop Killer Robots, warning that autonomous robots with lethal capabilities could become a threat to humanity. Discussions about a possible ban on autonomous weapons fell apart on August 21 during a United Nations meeting in Geneva, when Russian diplomats allegedly made a fuss over the language used in a document meant to begin the process of establishing a ban. "If you're a despot, how much easier is it to have a small cadre of engineers control a fleet of autonomous weapons for you than to have to keep your troops in line?" Nolan tells Inverse. "Autonomous weapons are potential weapons of mass destruction. They need to be made taboo in the same way that chemical and biological weapons are."


Killer robots: pressure builds for ban as governments meet

The Guardian

They will be "weapons of terror, used by terrorists and rogue states against civilian populations. Unlike human soldiers, they will follow any orders however evil," says Toby Walsh, professor of artificial intelligence at the University of New South Wales, Australia. "These will be weapons of mass destruction. One programmer and a 3D printer can do what previously took an army of people. They will industrialise war, changing the speed and duration of how we can fight. They will be able to kill 24-7 and they will kill faster than humans can act to defend themselves."


Should we be worried about 'killer robots'?

Al Jazeera

Campaigners are renewing calls for a pre-emptive ban on so-called "killer robots" as representatives of more than 80 countries meet to discuss autonomous weapons systems. The use of lethal autonomous weapons systems (LAWS) is "a step too far", said Mary Wareham, the global coordinator of the Campaign to Stop Killer Robots. "They cross a moral line, because we would see machines taking human lives on the battlefield or in law enforcement. "We want weapon systems and the use of force to remain under human control," Wareham said. Wareham spoke to Al Jazeera before Monday's meeting in Geneva, Switzerland, on a possible ban on LAWS. This is the fifth international meeting to discuss so-called "killer robots" since 2014, but no formal decisions will be taken yet, as countries are still working towards a common definition of LAWS and have yet to agree on whether they should be outlawed in international law. "This is going to be a crucial year. If we do not move swiftly, we could end up in a situation where it's too late and where fully autonomous weapons proliferate to the extent that every country has them," Wareham told Al Jazeera.


Robots with Guns: The Rise of Autonomous Weapons Systems

#artificialintelligence

The future of war lies in part with what the military calls "autonomous weapons systems" (AWS), sophisticated computerized devices which, as defined by the U.S. Department of Defense, "once activated, can select and engage targets without further intervention by a human operator." Whether it's a good idea or a bad one is debatable, but it isn't a question of if, but how soon, autonomous, artificially intelligent machines will fight side by side with human soldiers on the battlefield. United States Army General Robert W. Cone (now deceased) predicted in 2014 that as many as one-quarter of all U.S. combat soldiers might be replaced by drones and robots within the next 30 years. In the U.S., both the Army and Marine Corps are already testing remote-controlled devices like the Modular Advanced Armed Robotic System (MAARS), an unmanned ground vehicle (UGV) designed primarily for reconnaissance that can also be equipped with a grenade launcher and a machine gun. Fully autonomous versions of such systems are known as lethal autonomous weapons systems (LAWS for short, or, more pithily, "killer robots," as critics have dubbed them). Though they may conjure up futuristic, dystopian images redolent of The Terminator (the Arnold Schwarzenegger film about an armed super-robot from the future) or Robopocalypse (Daniel Wilson's 2011 science fiction novel about AI weapons turning on their creators), the dangers they pose are firmly rooted in reality.