United Nations Should Ban AI-Powered Military Weapons, Elon Musk, AI Experts Urge

International Business Times

Tesla CEO Elon Musk is among several major tech industry figures and researchers who've signed an open letter urging the United Nations to regulate the use of military weapons powered by artificial intelligence. In the letter from the Future of Life Institute -- of which Musk is a backer -- the 116 signatories express their concern over weapons that integrate autonomous technology and call for the U.N. to establish protections that would prevent an escalation in the development and use of these weapons. Autonomous weapons are military devices that use artificial intelligence for tasks such as determining which targets to attack or avoid. The letter warns that lethal autonomous weapons threaten to become the third revolution in warfare: once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.


Artificial Intelligence: Military Advisors Say AI Won't Bring About Robot Apocalypse

International Business Times

The apocalyptic future shown in sci-fi films -- the ones where robots gain consciousness and destroy humanity -- is not one you need to worry about, according to a report from the United States Department of Defense. The document, produced by JASON -- an independent advisory group of scientists and experts that briefs the government on matters of science and technology -- outlines trends in the field of artificial intelligence as it pertains to the U.S. military. According to the report, most computer scientists consider the purported threats posed by AI to be "at best uninformed," and those fears "do not align with the most rapidly advancing current research directions of AI as a field." The report instead says these existential fears stem from a very particular -- and small -- part of the research field called Artificial General Intelligence (AGI), defined as an AI that can successfully perform any intellectual task that a human can. The report argues we are unlikely to see an AGI emerge from current artificial intelligence research, and that the concept "has high visibility, disproportionate to its size or present level of success."