SAN FRANCISCO – Researchers have discovered a disturbing pattern: dozens of ships whose GPS signals tell them they're on land, at an airport no less, even when they're far out to sea.

An investigation released this week by the Washington, D.C.-based Resilient Navigation and Timing Foundation and Windward Ltd., a maritime data and analytics company, found multiple instances of so-called GPS spoofing in Russian waters. As recently as Monday, two vessels' GPS told them they were at Sochi Airport, near the site of the 2014 Sochi Olympics, 12 miles from the harbor where the vessels actually were.

Familiar to anyone using a smartphone or a built-in car navigation system to map out a route, the satellite-based system is also the main way ships and trucking fleets find their way.

While the actual intent isn't known, speculation among GPS experts has in recent weeks converged on the theory that the GPS disruption of ships is a side effect of efforts to protect sensitive Russian sites, such as the Kremlin and Russian President Vladimir Putin's summer home, from surveillance and attacks by drones.
The potential for advances in information-age technologies to undermine nuclear deterrence and influence the potential for nuclear escalation represents a critical question for international politics. One challenge is that uncertainty about the trajectory of technologies such as autonomous systems and artificial intelligence (AI) makes assessments difficult. This paper evaluates the relative impact of autonomous systems and artificial intelligence in three areas: nuclear command and control; nuclear delivery platforms and vehicles; and conventional applications of autonomous systems with consequences for nuclear stability. We argue that countries may be more likely to use risky forms of autonomy when they fear that their second-strike capabilities will be undermined. Additionally, the potential deployment of uninhabited, autonomous nuclear delivery platforms and vehicles could raise the prospect of accidents and miscalculation. Conventional military applications of autonomous systems could simultaneously influence nuclear force postures and first-strike stability in unanticipated ways. In particular, the need to fight at machine speed and the cognitive risk introduced by automation bias could increase the risk of unintended escalation. Finally, if used properly, more autonomous systems could find many applications in nuclear operations that increase reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis.
Russian President Vladimir Putin warned Friday that AI development "raises colossal opportunities and threats that are difficult to predict now." Speaking in a lecture to students, Putin cautioned that "it would be strongly undesirable if someone wins a monopolist position." Future wars will be fought by autonomous drones, Putin suggested, and "when one party's drones are destroyed by drones of another, it will have no other choice but to surrender."

U.N. urged to address lethal autonomous weapons

AI experts worldwide are also concerned. On August 20, 116 founders of robotics and artificial intelligence companies from 26 countries, including Elon Musk and Google DeepMind's Mustafa Suleyman, signed an open letter asking the United Nations to "urgently address the challenge of lethal autonomous weapons (often called 'killer robots') and ban their use internationally."