Collaborating Authors

Elsa B. Kania on Artificial Intelligence and Great Power Competition


The Diplomat's Franz-Stefan Gady talks to Elsa B. Kania about the potential implications of artificial intelligence (AI) for the military and how the world's leading military powers -- the United States, China, and Russia -- are planning to develop and deploy AI-enabled technologies in future warfighting. Kania is an Adjunct Senior Fellow with the Technology and National Security Program at the Center for a New American Security (CNAS). Her research focuses on Chinese military innovation in emerging technologies. She is also a Research Fellow with the Center for Security and Emerging Technology at Georgetown University and a non-resident fellow with the Australian Strategic Policy Institute (ASPI). Currently, she is a Ph.D. student in Harvard University's Department of Government. Kania is the author of numerous articles and reports, including Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power and A New Sino-Russian High-Tech Partnership. Her most recent report is Securing Our 5G Future, and she also recently co-authored a policy brief, AI Safety, Security, and Stability Among Great Powers. She can be followed @EBKania.

Artificial Intelligence and the Military: Technology Is Only Half the Battle


Editor's Note: As 2018 comes to a close, War on the Rocks is publishing a series of year-end reflections on what our editors and contributors learned from the publication's coverage of various national security topics. These reflections will examine how War on the Rocks coverage evolved over the year, what it taught us about the issue in question, and what questions remain to be answered in 2019 and beyond. Enjoy, and see you next year! What will advances in artificial intelligence (AI) mean for national security? This year in War on the Rocks, technical and non-technical experts with academic, military, and industry perspectives grappled with the promise and peril of AI in the military and defense realms.

Military applications of artificial intelligence


Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and government, from Oxford philosopher Nick Bostrom's concern about the existential risk that artificial intelligence poses to humanity, to Tesla founder Elon Musk's warning that artificial intelligence could trigger World War III, to Vladimir Putin's statement that leadership in AI will be essential to global power in the 21st century. What does this really mean, especially when you move beyond the rhetoric of revolutionary change and think about the real-world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon.

Strategy, Ethics, and Trust Issues (RealClearDefense)


In the aftermath of the German U-boat campaign in the First World War, many in Europe and the United States argued that submarines were immoral and should be outlawed. The British Admiralty supported this view and, as Blair has described, even offered to abolish its submarine force if other nations followed suit. While British proposals to ban submarines in 1922 and 1930 were defeated, restrictions were imposed on their use, mandating that a submarine could not attack a ship until that ship's crew and passengers had been placed in safety. This reaction to the development of a new means of war is illustrative of the type of ethical and legal challenges that must be addressed as military organizations adopt greater human-machine integration.

Real Artificial Intelligence vs. Fake Artificial Intelligence


Artificial intelligence (AI) remains a loosely defined and often misunderstood term. We humans are full of biases and prejudices, such as the so-called anthropomorphic mentality: "the attribution of distinctively human-like feelings, mental states, and behavioral characteristics to inanimate objects, animals, and in general to natural phenomena and supernatural entities". We like everything around us to be like us, anthropomorphizing religious figures, animals, the environment, and technological artifacts (from computational artifacts to robots), including AI, which is developed to replicate humans in body, mind, and behavior. The human body, brain, intelligence, mind, and behavior are all the (privileged) sources of inspiration for AI, both as models to emulate and as goals to achieve. Thus, the human body, brain, and behavior are projected onto human-like AI models, algorithms, and applications, or robots, with all the attendant consequences, such as a highly probable future of "Extinction": synthetics vs. humans. On this view, such an AI/ML/DL is the highway to technological unemployment and omnicide (anthropogenic human extinction), the termination of Homo sapiens as a species. Of all possible scenarios of omnicide, such as climate change, global nuclear annihilation, biological warfare, ecological collapse, and emerging technologies like biotechnology or self-replicating nanobots, the most real is a human-replicating machine intelligence and learning (MIL) and cognitive technologies imitating cognitive functions, skills, and capabilities.