United Nations Should Ban AI-Powered Military Weapons, Elon Musk, AI Experts Urge

International Business Times

Autonomous weapons are military devices that use artificial intelligence for tasks such as selecting targets to attack or avoid. "We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability," Gariepy said. For observers like the letter's signees, much of the concern over artificial intelligence is not about science-fiction hypotheticals but, as Gariepy alludes, about systems nearing deployment. For his part, Musk, the Tesla CEO, has long supported increased regulation of artificial intelligence research and has regularly argued that, if left unchecked, it could pose a risk to the future of mankind.

Artificial Intelligence: Military Advisors Say AI Won't Bring About Robot Apocalypse

International Business Times

According to the report, most computer scientists consider the fears of existential threats posed by AI to be "at best uninformed," and those fears "do not align with the most rapidly advancing current research directions of AI as a field." It instead says these existential fears stem from a very particular, and small, part of the field of research called Artificial General Intelligence (AGI), which is defined as an AI that can successfully perform any intellectual task that a human can. The report argues that AGI is unlikely to emerge from current artificial intelligence research and that the concept "has high visibility, disproportionate to its size or present level of success." Musk launched a nonprofit AI research company called OpenAI in 2015 and pledged $1 billion to it, with the intention of developing best practices and helping prevent potentially damaging applications of the technology.