As Artificial Intelligence (AI) comes into its own, it is making a significant impact on our everyday lives. Self-driving cars, industrial automation, space exploration and robotics are some of the examples showing how the technology is paving its path into the future. But AI has also found its way into the defense industry, leading to a worrisome rise in the development of autonomous weapons. Air Force Gen. Paul Selva, Vice Chairman of the Joint Chiefs of Staff, described these so-called "thinking weapons" in a 2016 presentation at the Center for Strategic and International Studies in Washington, D.C. "…robotic systems to do lethal harm… a Terminator without a conscience," he said, referring to the 1984 cult science fiction film "The Terminator," starring Arnold Schwarzenegger.
The world's leading AI and robotics experts, including Tesla's Elon Musk and Google's Mustafa Suleyman, have urged the United Nations to take action to prevent the development of killer robots before it is too late. Their letter, signed by 116 experts from 26 countries, opens with the words: "As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm." Though none has been built yet, a killer robot is, conceptually, fully autonomous: it can target, engage and kill humans without any human intervention. Unlike a cruise missile or a remotely piloted drone, where humans make all the targeting decisions, a quadcopter equipped with AI could, for example, search for and destroy people who meet pre-defined criteria entirely on its own. "Retaining human control over use of force is a moral imperative and essential to promote compliance with international law, and ensure accountability," Mary Wareham, advocacy director of the Arms Division at Human Rights Watch, wrote in January.