Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate military AI with the humanoid robots of the Terminator franchise, discussion of the national security consequences of artificial intelligence has grown significantly. These discussions span academia, business, and government, from Oxford philosopher Nick Bostrom's concern about the existential risk AI poses to humanity, to Tesla founder Elon Musk's warning that AI could trigger World War III, to Vladimir Putin's statement that leadership in AI will be essential to global power in the 21st century. What does this really mean, especially once we move beyond the rhetoric of revolutionary change and consider the real-world consequences of potential military applications of artificial intelligence? Artificial intelligence is not a weapon.
War will always be fought by people with weapons. It will also always involve "platforms," such as tanks, and the capabilities they carry. In discussions of the future of warfare, it is important to understand that the issue is not just the person or the platform but how everything is tied together. At the heart of that effort today are attempts to develop better algorithms and artificial intelligence, which will play an increasing role in future wars, especially for high-tech militaries.
In the popular consciousness, the idea of military AI immediately brings to mind autonomous weapon systems, or "killer robots": machines that can independently target and kill humans. The possible presence of such systems on battlefields has sparked a welcome international debate on the legality and morality of their use. The controversies surrounding autonomous weapons, however, must not obscure the fact that, like most technologies, AI has many non-lethal uses for militaries across the world, and especially for the Indian military. These uses are, on the whole, less controversial than autonomous weapons and far more practicable at the moment, with clear, demonstrable benefits.
NEW YORK--China is eroding America's military superiority and conventional deterrence by integrating artificial intelligence (AI) systems into its military strategies, operations, and capabilities, an independent U.S. federal commission warned, adding that the United States needs to step up investment in the technology and apply it to national security missions. China's communist regime has established research and development institutes to advance its military applications of AI. Those institutes are analogous to the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Defense agency responsible for developing emerging technologies for military use. Chinese researchers are developing military applications of AI in the areas of "swarming, decision support, and information operations," while the country's defense industry is pursuing "increasingly autonomous weapons systems," according to an interim report released by the National Security Commission on Artificial Intelligence on Nov. 4. The Chinese Communist Party (CCP) has declared that China will be the world leader in AI by 2030, part of its broader strategy to challenge America's military and economic position in Asia, as Beijing also pursues "intelligentization" as a new imperative of its military modernization.
NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that kill independently - but are moving at a snail's pace on agreeing global rules over their use in future wars, technology and human rights experts warn. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but they all have human supervision. Nations such as the United States, Russia, and Israel are now investing in lethal autonomous weapons systems (LAWS), which can identify, target, and kill a person entirely on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).