Don't fear the robopocalypse: Autonomous weapons expert Paul Scharre


The Doomsday Clock is an internationally recognized symbol that conveys how close we are to destroying our civilization with dangerous technologies of our own making. First and foremost among these are nuclear weapons, but the dangers include climate-changing technologies, emerging...

Thousands of scientists pledge not to help build killer AI robots


Thousands of scientists who specialise in artificial intelligence (AI) have declared that they will not participate in the development or manufacture of robots that can identify and attack people without human oversight. Demis Hassabis at Google DeepMind and Elon Musk at the US rocket company SpaceX are among more than 2,400 signatories to the pledge, which is intended to deter military firms and nations from building lethal autonomous weapon systems, also known as Laws. The move is the latest from concerned scientists and organisations seeking to highlight the dangers of handing over life-and-death decisions to AI-enhanced machines. It follows calls for a preemptive ban on technology that campaigners believe could usher in a new generation of weapons of mass destruction. Orchestrated by the Boston-based Future of Life Institute, the pledge calls on governments to agree norms, laws and regulations that stigmatise and effectively outlaw the development of killer robots.

Lethal Microdrones, Dystopian Futures, and the Autonomous Weapons Debate

IEEE Spectrum Robotics

This week, the first meeting of the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on lethal autonomous weapons systems is taking place at the United Nations in Geneva. Organizations like the Campaign to Stop Killer Robots are encouraging the UN to move forward on international regulation of autonomous weapons, which is welcome: discussing how these issues will shape the future of robotics and society is vitally important.

AI Experts Warn Autonomous Weapons May Become 'Third Revolution In Warfare'

International Business Times

The use of artificial intelligence (AI) in warfare has been growing rapidly. Several weapons now use integrated AI software to gradually reduce the number of soldiers placed in direct mortal peril. These weapon systems can target and attack people without human intervention, and the growth of this technology is raising eyebrows. Several prominent scientists have already questioned the future of AI weaponry simply because of its unpredictability.

Tech leaders call for autonomous weapons ban

Al Jazeera

Thousands of the world's pre-eminent technology experts have called for a global ban on the development of lethal autonomous weapons, warning they could become instruments of "violence and oppression". More than 2,400 individuals and 150 companies from 90 countries vowed to play no part in the construction, trade, or use of autonomous weapons in a pledge signed on Wednesday at the 2018 International Joint Conference on Artificial Intelligence in Stockholm, Sweden. Elon Musk, CEO of SpaceX and Tesla, and representatives of Google's DeepMind subsidiary were among supporters of the pledge. "The decision to take a human life should never be delegated to a machine," a statement said. "Lethal autonomous weapons - selecting and engaging targets without human intervention - would be dangerously destabilising for every country and individual."