It's been a couple of years since AI-controlled bots fragged each other in an epic Doom deathmatch. Now, EA's Search for Extraordinary Experiences Division, or SEED, has taught self-learning AI agents to play Battlefield 1. Each character in the basic match uses a neural-network model trained to play the game through trial and error. The AI-controlled troops first learned the basics by watching human players, then refined their skills through parallel training against other bots. The AI soldiers even learned to pick up ammo or health when they're running low, much as a human player would.
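SEED's agents use deep neural networks trained at scale, and its exact setup isn't public. Still, the trial-and-error loop described above can be sketched in miniature: the toy below uses tabular Q-learning, with an invented two-state world and invented rewards, to show how an agent can discover on its own that grabbing health only pays off when health is low.

```python
import random

# Toy states and actions; names and rewards are invented for illustration.
STATES = ["low_health", "high_health"]
ACTIONS = ["fight", "grab_health"]

def reward(state, action):
    """Hypothetical payoff: grabbing health helps only when health is low."""
    if state == "low_health":
        return 1.0 if action == "grab_health" else -1.0
    return 1.0 if action == "fight" else -0.5

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

random.seed(0)
for _ in range(5000):  # trial-and-error episodes
    s = random.choice(STATES)
    # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
    # One-step update toward the observed reward (no next state in this toy).
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

# The learned policy prefers grabbing health only when running low.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

No human demonstrations are needed here; the behavior emerges purely from repeated trials, which is the "trial and error" half of SEED's training recipe.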
This week, Raytheon announced it successfully tested its anti-drone technology. The system, a dune buggy fitted with an advanced high-power microwave and a laser, brought down 45 unmanned aerial vehicles (UAVs) at a U.S. Army exercise held at Fort Sill, Oklahoma. The microwave system was able to bring down multiple UAVs at once when the devices swarmed, while the high energy laser (HEL) identified and shot down 12 Class I and II UAVs, as well as six stationary devices that propelled mortar rounds. The equipment is intended to protect U.S. troops against drones; it's self-contained and easy to deploy in a tense situation. The U.S. Air Force Research Laboratory worked with Raytheon to develop this counter-drone tech.
In a bid to modernize battlefield resources, the Chinese Army has started trialing unmanned tanks, according to a new report from the state-run publication Global Times. The upgraded vehicles are currently operated from a remote manned console, much like remotely piloted drones. However, the People's Liberation Army Ground Force, aka PLAGF, also plans to integrate them with artificial intelligence to make them largely autonomous. A short video from CCTV, a prominent state television broadcaster in the People's Republic of China, recently appeared on the internet showcasing one of the unmanned vehicles being tested. The clip features a modified version of the dated Type 59 tank moving forward and backward like a remote-controlled car, with a Chinese Army official operating its control box a few meters away.
Penn will now build robots for the Army. The United States Army Research Laboratory has awarded a five-year, $27 million grant to the School of Engineering and Applied Science to create autonomous, intelligent robots designed to learn from and adapt to challenging environments. The robots will be charged with assisting humans in tasks like hostage rescue, gathering information in the wake of terrorist attacks or natural disasters, and humanitarian missions. Penn will lead the Army Research Laboratory's Distributed and Collaborative Intelligent Systems and Technology (DCIST) Collaborative Research Alliance, working alongside MIT, the Georgia Institute of Technology, and faculty members from the University of California–San Diego, the University of California–Berkeley, and the University of Southern California. The researchers will focus on advancing the field of distributed intelligence and learning, building a cohesive team of robots, sensors, and humans.
The U.S. Army is working on a new mine detector that allows soldiers to see the shape and analyze the size of an explosive hidden underground. The new device uses real-time spatial location tracking and a range of sensors to produce an image of the buried object, whether an active IED or an unexploded artillery shell. As seen in the video, the tech draws a colored map on a tablet as the surface is scanned by the device. The area highlighted in orange roughly represents the scale of the metallic object, or a potential risk zone, while other colors represent safer areas. "You can immediately see the shape of the object and roughly its size," Christopher Marshall, a scientist in the Countermine Division of the Night Vision and Electronic Sensors Directorate, said in a statement.
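The device's actual signal processing hasn't been published, but the final mapping step it describes, turning a grid of sensor readings into an orange-versus-safe color map, can be sketched simply. The readings, threshold, and color labels below are all invented for illustration.

```python
# Hypothetical metal-signature readings from a scanned patch of ground
# (higher values = stronger metallic return).
readings = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.9, 0.8, 0.1],
    [0.1, 0.8, 0.9, 0.2],
    [0.0, 0.1, 0.2, 0.1],
]

THRESHOLD = 0.5  # assumed cutoff separating background soil from a buried object

def colorize(grid, threshold=THRESHOLD):
    """Map each reading to a display color: orange for a likely object."""
    return [["orange" if v >= threshold else "green" for v in row]
            for row in grid]

color_map = colorize(readings)
# The contiguous orange block approximates the object's shape; counting its
# cells gives a rough estimate of its size, as Marshall describes.
size = sum(row.count("orange") for row in color_map)
print(size)
```

Because the device tracks its own spatial position in real time, each reading lands in the correct grid cell, which is what lets the orange region reproduce the object's outline rather than just signal strength.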
The Army has eliminated the need to do lengthy pre-flight checks for its unmanned aerial vehicles by applying artificial intelligence algorithms on top of existing software programs. "They call it an unmanned vehicle because there's no pilot sitting up in it, but the pilot is sitting in a box in the desert doing the pre-flight checks and doing all the flight controls," said Walter O'Brien, CEO of Scorpion Computer Services. "We've now automated that person where you just hit a button to do that kind of stuff."
In the game, a version of "20 questions," a player tries to estimate an unknown value on a sliding scale by asking a series of questions whose answers are binary (yes or no). In this way, scientists say, their research findings could lead to new techniques for machines to ask other machines questions, or for machines and humans to query each other. ARL senior scientist Dr. Brian Sadler teamed with University of Michigan researchers Hye Won Chung, Lizhong Zheng, and Professor Alfred O. Hero to conduct the study, which appears in the February 2018 issue of the IEEE Transactions on Information Theory. The work is part of a larger study to develop methods for machines and humans to interact. "It is well known that artificial intelligence systems, such as those found nowadays on every smartphone, can answer at least some questions," Sadler said.
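In the idealized noiseless case, each yes/no answer can halve the remaining interval, which is ordinary bisection; the sketch below assumes truthful answers (the research concerns harder settings, such as answers that may be unreliable).

```python
def question_game(target, lo=0.0, hi=1.0, questions=20):
    """Locate an unknown value on [lo, hi] with binary questions.

    Each question, "is the value above the midpoint?", halves the
    remaining interval, so the error shrinks as 2**-questions.
    """
    for _ in range(questions):
        mid = (lo + hi) / 2
        if target > mid:   # truthful yes/no answer (noiseless assumption)
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2   # estimate: midpoint of the final interval

estimate = question_game(0.3141)
print(abs(estimate - 0.3141) < 1.0 / 2**20)  # within interval resolution
```

After 20 questions the interval has width 2^-20, so the estimate is guaranteed to be within half that of the true value; designing good questioning strategies when answers can be wrong is the substance of the published work.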
In popular consciousness, the idea of military AI immediately brings to mind the notion of autonomous weapon systems or "killer robots", machines that can independently target and kill humans. The possible presence of such systems on battlefields has sparked a welcome international debate on the legality and morality of using these weapon systems. The controversies surrounding autonomous weapons, however, must not obscure the fact that like most technologies, AI has a number of non-lethal uses for militaries across the world, and especially for the Indian military. These are, on the whole, not as controversial as the use of AI for autonomous weapons, and, in fact, are far more practicable at the moment, with clear demonstrable benefits.