Artificial Intelligence (AI) in warfare is growing rapidly. Several weapon systems now integrate AI software, gradually reducing the number of soldiers placed in direct mortal peril. These systems can target and attack without human intervention. But the growth of this technology is raising concerns: several prominent scientists have already questioned the future of AI weaponry precisely because of its unpredictability.
As Artificial Intelligence (AI) comes into its own, it is making a significant impact on our everyday lives. Its use in self-driving cars, industrial automation, space exploration and robotics shows how it is paving its path into the future. But the technology has also found its way into the defense industry, leading to a worrisome increase in the development of autonomous weapons. These so-called "thinking weapons" were described by Air Force Gen. Paul Selva, Vice Chairman of the Joint Chiefs of Staff, in a 2016 presentation at the Center for Strategic and International Studies. "But robotic systems to do lethal harm… a Terminator without a conscience," he said, referring to "The Terminator," the 1984 cult science fiction film starring Arnold Schwarzenegger.
The world's leading Artificial Intelligence (AI) and robotics experts, including Tesla's Elon Musk and Google DeepMind's Mustafa Suleyman, have urged the United Nations to act to prevent the development of killer robots before it is too late. The letter, signed by 116 experts from 26 countries, opens with the words: "As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm." Though none has been built yet, a killer robot is, conceptually, fully autonomous: it can target, engage and kill humans without any human intervention. Unlike a cruise missile or a remotely piloted drone, where humans make all the targeting decisions, a quadcopter with AI could, for example, search for and destroy people meeting pre-defined criteria on its own. "Retaining human control over use of force is a moral imperative and essential to promote compliance with international law, and ensure accountability," Mary Wareham, advocacy director of the Arms Division at Human Rights Watch, wrote in January.
Tesla CEO Elon Musk is among several major tech industry figures and researchers who have signed an open letter urging the United Nations to regulate the use of military weapons powered by artificial intelligence. In the letter from the Future of Life Institute, which Musk backs, the 116 signatories express their concern over weapons that integrate autonomous technology and call on the U.N. to establish protections that would prevent an escalation in the development and use of these weapons. Autonomous weapons are military devices that use artificial intelligence for tasks such as determining which targets to attack or avoid. As the letter warns, lethal autonomous weapons threaten to become the third revolution in warfare: once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.