Artificial intelligence (AI) in warfare is advancing rapidly. Several weapon systems now integrate AI software, gradually reducing the number of soldiers placed in direct mortal peril. Some of these systems can select and attack targets without human intervention. That capability has drawn growing scrutiny: several prominent scientists have questioned the future of AI-driven weaponry precisely because of its unpredictability.
The Campaign to Stop Killer Robots, a coordinated international coalition of non-governmental organizations dedicated to a preemptive ban on fully autonomous weapons, launched in April 2013. A milestone came in 2016, when countries at the fifth review conference of the United Nations Convention on Conventional Weapons (CCW) held formal talks to expand their deliberations on fully autonomous weapons. The conference also established a Group of Governmental Experts (GGE), chaired by India's ambassador to the U.N., Amandeep Gill. According to Human Rights Watch, more than a dozen countries are developing autonomous weapon systems.
Autonomous weapons are military systems that use artificial intelligence for tasks such as selecting targets to attack or avoid. "We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability," Gariepy has warned. For observers like the letter's signatories, the concern is not the science-fiction hypotheticals Gariepy alludes to but weapons already in development. Elon Musk, the Tesla CEO, has long supported increased regulation of artificial intelligence research and has regularly argued that, left unchecked, AI could pose a risk to the future of mankind.