How Artificial Intelligence Will Make Decisions In Tomorrow's Wars

#artificialintelligence

Yes, companies use AI to automate various tasks, while consumers use AI to make their daily routines easier. But governments, and militaries in particular, also have a massive interest in the speed and scale offered by AI. Nation states are already using artificial intelligence to monitor their own citizens, and as the U.K.'s Ministry of Defence (MoD) revealed last week, they'll also be using AI to make decisions related to national security and warfare. The MoD's Defence and Security Accelerator (DASA) has announced an initial injection of £4 million in funding for new projects and startups exploring how to use AI in the context of the British Navy. In particular, the DASA is looking to support AI- and machine learning-based technology that will "revolutionise the way warships make decisions and process thousands of strands of intelligence and data."


Military applications of artificial intelligence

#artificialintelligence

Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and governments, from Oxford philosopher Nick Bostrom's concern about the existential risk to humanity posed by artificial intelligence, to Tesla founder Elon Musk's concern that artificial intelligence could trigger World War III, to Vladimir Putin's statement that leadership in AI will be essential to global power in the 21st century. What does this really mean, especially when you move beyond the rhetoric of revolutionary change and think about the real-world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon.


Remarks by High Representative/Vice-President Federica Mogherini at the press conference following the Informal Meeting of EU Defence Ministers

#artificialintelligence

Let me start by thanking Antti [Kaikkonen, Minister of Defence of Finland] and all the Finnish colleagues for an excellent couple of days, 24 hours, of this informal meeting of the European Union Member States' Defence Ministers. It has been extremely productive and intense. Our agenda has been very heavy, heavy in terms of content, but light in terms of the kind of approach and relations we have had. The wonderful Helsinki sun has helped to establish a friendly atmosphere, and I would say that the exchanges have been extremely consensual, productive and positive. Thank you for that, because your hospitality has contributed to setting a positive and constructive tone.


Hitting the Books: How American militarism and new technology may make war more likely

Engadget

There's nobody better at prosecuting a war than the United States -- we've got the best-equipped and biggest-budgeted fighting force on the face of the Earth. But does carrying the biggest stick still constitute a strategic advantage if the mere act of possessing it seems to make us more inclined to use it? In his latest book, Future Peace (sequel to 2017's Future War), Dr. Robert H. Latiff, Maj Gen USAF (Ret), explores how the American military's increasing reliance on weaponized drones, AI and machine learning systems, automation, and similar cutting-edge technologies, when paired with an increasingly rancorous and often outright hostile global political environment, could create the perfect conditions for getting a lot of people killed. In the excerpt below, Dr. Latiff looks at the impact that America's lionization of its armed forces in the post-Vietnam era and new access to unproven tech have on our ability to mitigate conflict and prevent armed violence. Published by University of Notre Dame Press.