Science fiction has a funny habit of becoming science fact once enough time has passed. The wide-eyed wonder of children sitting cross-legged in front of the TV eventually becomes inspiration for incredible feats of engineering, or the means of our own destruction. Science fiction is full of this particular trope: military personnel enhancing their combat capabilities with some manner of powered armor or exoskeleton. The latest example of fiction edging toward fact is a new powered exoskeleton the U.S. Army is testing, per Scout.
The use of artificial intelligence (AI) in warfare has been growing rapidly. Several weapon systems now use integrated AI software, gradually reducing the number of soldiers placed in direct mortal peril. Some of these systems can select and attack targets without human intervention. But the growth of this technology is raising eyebrows: several prominent scientists have already questioned the future of AI-driven weaponry precisely because of its unpredictability.
The advantages of such weapons were discussed in a New York Times article published last year, which argued that the speed and precision of these novel systems could not be matched by humans. The official stance of the United States was laid out at the Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems, held in Geneva in 2016, where the U.S. stated that "appropriate levels" of human approval were necessary for any engagement of autonomous weapons involving lethal force. In 2015, numerous scientists and experts signed an open letter warning that developing such intelligent weapons could set off a global arms race. A similar letter, urging the United Nations to ban killer robots, or lethal autonomous weapons, was signed by the world's top artificial intelligence and robotics companies at the International Joint Conference on Artificial Intelligence (IJCAI), held in Melbourne in August.
Autonomous weapons are military systems that use artificial intelligence for tasks such as determining which targets to attack or avoid. "We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability," as Gariepy put it. For observers like the letter's signatories, much of the concern over artificial intelligence isn't about the science-fiction hypotheticals Gariepy alludes to. For his part, Tesla CEO Elon Musk has been a longtime supporter of increased regulation of artificial intelligence research, and has regularly argued that, left unchecked, it could pose a risk to the future of mankind.