This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part b.), which asks what might happen if the United States fails to develop robust AI capabilities that address national security issues. The year is 2040, and the United States military has limited artificial intelligence (AI) capability. Enthusiasm about AI's potential in the 2010s and 2020s translated into little lasting change. Domestic troubles forced a national focus on budget cuts, international isolation, and strengthening the union. Civil unrest during the 2032 elections worsened everything -- factionalism and partisanship smashed through the walls of the Pentagon. Major initiatives foundered over costs and fear of aiding political opponents.
On June 3, 1980, at about two-thirty in the morning, computers at the National Military Command Center, beneath the Pentagon, at the headquarters of the North American Air Defense Command (NORAD), deep within Cheyenne Mountain, Colorado, and at Site R, the Pentagon's alternate command post hidden inside Raven Rock Mountain, Pennsylvania, issued an urgent warning: the Soviet Union had just launched a nuclear attack on the United States. The Soviets had recently invaded Afghanistan, and the animosity between the two superpowers was greater than at any other time since the Cuban Missile Crisis. U.S. Air Force ballistic-missile crews removed their launch keys from the safes, bomber crews ran to their planes, fighter planes took off to search the skies, and the Federal Aviation Administration prepared to order every airborne commercial airliner to land.

President Jimmy Carter's national-security adviser, Zbigniew Brzezinski, was asleep in Washington, D.C., when the phone rang. His military aide, General William Odom, was calling to inform him that two hundred and twenty missiles launched from Soviet submarines were heading toward the United States. Brzezinski told Odom to get confirmation of the attack. A retaliatory strike would have to be ordered quickly; Washington might be destroyed within minutes. Odom called back and offered a correction: twenty-two hundred Soviet missiles had been launched. Brzezinski decided not to wake up his wife, preferring that she die in her sleep. As he prepared to call Carter and recommend an American counterattack, the phone rang for a third time. Odom apologized -- it was a false alarm.

An investigation later found that a defective computer chip in a communications device at NORAD headquarters had generated the erroneous warning. A similar false alarm had occurred the previous year, when someone mistakenly inserted a training tape, featuring a highly realistic simulation of an all-out Soviet attack, into one of NORAD's computers.
Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it's time for a new US nuclear command, control, and communications system -- one that gives artificial intelligence control over the launch button. In an article in War on the Rocks titled, ominously, "America Needs a 'Dead Hand,'" US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title of their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union's nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, "[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position."
The Diplomat's Franz-Stefan Gady talks to Elsa B. Kania about the potential implications of artificial intelligence (AI) for the military and how the world's leading military powers -- the United States, China, and Russia -- are planning to develop and deploy AI-enabled technologies in future warfighting. Kania is an Adjunct Senior Fellow with the Technology and National Security Program at the Center for a New American Security (CNAS). Her research focuses on Chinese military innovation in emerging technologies. She is also a Research Fellow with the Center for Security and Emerging Technology at Georgetown University and a non-resident fellow with the Australian Strategic Policy Institute (ASPI). Currently, she is a Ph.D. student in Harvard University's Department of Government. Kania is the author of numerous articles and reports, including Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power and A New Sino-Russian High-Tech Partnership. Her most recent report is Securing Our 5G Future, and she also recently co-authored a policy brief, AI Safety, Security, and Stability Among Great Powers. She can be followed @EBKania.
One of the most popular movie franchises of our time is the "Terminator" series, launched back in the early 1980s and featuring seven-time Mr. Olympia bodybuilder Arnold Schwarzenegger as a futuristic humanoid killing machine. As noted by Great Power War, the backstory to the film is that the creation of the nearly invincible cyborg Terminators stemmed from a "SkyNet" computer system that controlled U.S. nuclear weapons and "got smart," eventually seeing all humans as its enemy. So, in one fell swoop, the system launched its missiles at pre-programmed targets, which, of course, invited a second-strike counter-launch and created a nuclear holocaust that nearly destroyed all of humankind. While the Terminator series never really identified the 'smart' SkyNet computer system as having artificial intelligence, some years later, after AI became more of a thing, it was understood that that is the kind of system the fictional SkyNet operated. The "machine-learning" aspect of AI is how SkyNet "got smart" one day and launched the nuclear payloads it controlled. But the Terminator series is just a set of movies, right?