There are two broad – and distinct – areas of application of AI and machine learning in which the ICRC has a particular interest: its use in the conduct of warfare or in other situations of violence; and its use in humanitarian action to assist and protect the victims of armed conflict. This paper sets out the ICRC's perspective on the use of AI and machine learning in armed conflict, the potential humanitarian consequences, and the associated legal obligations and ethical considerations that should govern its development and use. AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially in relation to: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making. In the view of the ICRC, there is a need for a genuinely human-centred approach to any use of these technologies in armed conflict. It will be essential to preserve human control and judgement in applications of AI and machine learning to tasks and decisions that may have serious consequences for people's lives, especially where they pose risks to life, and where the tasks or decisions are governed by rules of international humanitarian law.
Regarding fear and Artificial Intelligence (AI), one question often comes up: 'Will we be killed by a Terminator doppelganger?' I don't know whether this will happen eventually, but I do know that we already have robots fighting our wars. This century is therefore the first time in human history that we have engaged in Unmanned Warfare. What is the current status of this 'Unmanned Warfare'? What do people think about drone strikes, and will terminators be the next step?
Weapons of war have evolved over time, but the decision to kill has always rested with humans. With developments in AI and autonomous technology, however, it is now possible to build killing machines that require no human input at all. Taking that final decision away from a human raises serious ethical concerns over the use of fully autonomous weapons: it could mean wars become less about fighting and more about extermination. In an article for The Conversation, Dr Peter Lee, Director for Security and Risk Research and Innovation at the University of Portsmouth, explains the potential devastation these machines could cause.
Killer robots, whether they're the product of scaremongering or a real threat to the international power balance, now have their very own set of ethical rules. However, the newly published Pentagon guidelines on the military use of AI are unlikely to satisfy its critics. The draft guidelines were released late last week by the Defense Innovation Board (DIB), which the Department of Defense (DoD) had tasked in 2018 with producing a set of ethical rules for the use of AI in warfare. The DIB has spent the past 12 months studying AI ethics and principles with academics, lawyers, computer scientists, philosophers, and business leaders – all chaired by ex-Google CEO Eric Schmidt. What they came up with had to align with the DoD's AI Strategy published in 2018, which determines that AI should be used "in a lawful and ethical manner to promote our values".
Having invented the first machine gun, Richard John Gatling explained (or at least justified) his invention in a letter to a friend in 1877: with such a machine, it would be possible to replace 100 men with rifles on the battlefield, greatly reducing the number of men injured or killed. This sentiment, replacing soldiers, or at least protecting them from harm to the greatest extent possible through the inventions of science and technology, has been a thoroughly American ambition since the Civil War. And now, with developments in computing, artificial intelligence and robotics, it may soon be possible to replace soldiers entirely. Only this time America is not alone and may not even be in the lead. Many countries today, including Russia and China, are believed to be developing weapons that will have the ability to operate autonomously: to discover a target, make the decision to engage, and then attack, all without human intervention.