War Machines: Artificial Intelligence in Conflict


Richard John Gatling, inventor of the first machine gun, explained (or at least justified) his invention in an 1877 letter to a friend: with such a machine, it would be possible to replace 100 riflemen on the battlefield, greatly reducing the number of men injured or killed. This sentiment--replacing soldiers, or at least protecting them from harm as far as possible through the inventions of science and technology--has been a thoroughly American ambition since the Civil War. And now, with developments in computing, artificial intelligence and robotics, it may soon be possible to replace soldiers entirely. Only this time America is not alone and may not even be in the lead. Many countries today, including Russia and China, are believed to be developing weapons able to operate autonomously: discovering a target, deciding to engage, and then attacking, all without human intervention.

Don't fear the robopocalypse: Autonomous weapons expert Paul Scharre


The Doomsday Clock is an internationally recognized design that conveys how close we are to destroying our civilization with dangerous technologies of our own making. First and foremost among these are nuclear weapons, but the dangers include climate-changing technologies, emerging...

When Robots Can Decide Whether You Live or Die


Computers have gotten pretty good at making certain decisions for themselves. Automatic spam filters block most unwanted email. But can a machine ever be trusted to decide whether to kill a human being? It's a question taken up by the eighth episode of the Sleepwalkers podcast, which examines the AI revolution. Recent, rapid growth in the power of AI technology is causing some military experts to worry about a new generation of lethal weapons capable of independent and often opaque actions.

In Army of None, a field guide to the coming world of autonomous warfare


The Silicon Valley-military industrial complex is increasingly in the crosshairs of artificial intelligence engineers. A few weeks ago, Google was reported to be backing out of a Pentagon contract around Project Maven, which would use image recognition to automatically evaluate photos. Earlier this year, AI researchers around the world signed petitions calling for a boycott of any research that could be used in autonomous warfare. For Paul Scharre, though, such petitions barely touch the deep complexity, nuance, and ambiguity that will make evaluating autonomous weapons a major concern for defense planners this century. In Army of None, Scharre argues that merely agreeing on definitions for these machines will take enormous effort among nations, let alone managing their effects.