Radar used to be a slow science. Electronic warfare is a blanket term that encompasses the radar signals used to detect an attack, the radios used to communicate that the attack is coming, and the specific radio interference sent to confuse enemy radars as they attack. During the Cold War, every part of this was analog. "In Vietnam we learned what an SA-2 radar signal started looking like," Joshua Niedzwiecki, director of the Sensor Processing and Exploitation group at BAE Systems, tells Popular Science. The SA-2 is a surface-to-air missile that destroyed many U.S. Air Force planes, especially B-52 bombers, over Vietnam.
The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement on whether such weapons should be banned in line with international humanitarian laws under the Geneva Convention. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and autonomous weapons, and on whether more or fewer people will lose their lives as a result of the implementation of LAWS. There are also contrary views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at the Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapon System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.
Z Advanced Computing, Inc. (ZAC) of Potomac, MD, announced on August 27 that it has been funded by the U.S. Air Force to apply ZAC's detailed 3D image recognition technology, based on Explainable-AI, to drones (unmanned aerial vehicles, or UAVs) for aerial image and object recognition. ZAC is the first to demonstrate Explainable-AI, in which various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks, even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.
The United States Air Force wants robots and service members to be best buds on the battlefield. Last Friday, the Air Force announced a grant of $7.5 million for research on ways to make humans trust artificial intelligence (AI) so that people and machines can collaborate on missions. Soon service members in every branch of the Armed Forces will be working with AI on a daily basis, be it unmanned aerial vehicles, underwater drones, or robot soldiers (the U.S. military had to shelve the Boston Dynamics LS3 "robotic mule" because it was too loud, but last week the company revealed the much stealthier SpotMini). On November 1, 2014, one week after Elon Musk compared developing AI to "summoning the demon," Undersecretary of Defense Frank Kendall issued a memo asking the Defense Science Board to study what issues must be solved in order to expand the use of AI "across all war-fighting domains." But robotic weapons and soldiers won't be as effective if their human counterparts don't trust them.