Z Advanced Computing, Inc. (ZAC) of Potomac, MD announced on August 27 that it has been funded by the US Air Force to apply ZAC's detailed 3D image recognition technology, based on Explainable-AI, to drones (unmanned aerial vehicles, or UAVs) for aerial image/object recognition. ZAC is the first to demonstrate Explainable-AI, in which various attributes and details of 3D (three-dimensional) objects can be recognized from any view or angle. "With our superior approach, complex 3D objects can be recognized from any direction, using only a small number of training samples," said Dr. Saied Tadayon, CTO of ZAC. "For complex tasks, such as drone vision, you need ZAC's superior technology to handle detailed 3D image recognition." "You cannot do this with the other techniques, such as Deep Convolutional Neural Networks, even with an extremely large number of training samples. That's basically hitting the limits of the CNNs," continued Dr. Bijan Tadayon, CEO of ZAC.
The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical questions about the use of AI and causing disagreement over whether such weapons should be banned in line with international humanitarian law under the Geneva Conventions. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and fully autonomous weapons, and on whether more or fewer people will lose their lives as a result of the deployment of LAWS. There are also conflicting views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapons System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.
In popular consciousness, the idea of military AI immediately brings to mind autonomous weapon systems or "killer robots": machines that can independently target and kill humans. The possible presence of such systems on battlefields has sparked a welcome international debate on the legality and morality of using these weapons. The controversies surrounding autonomous weapons, however, must not obscure the fact that, like most technologies, AI has a number of non-lethal uses for militaries across the world, and especially for the Indian military. These are, on the whole, not as controversial as the use of AI in autonomous weapons and, in fact, are far more practicable at the moment, with clear, demonstrable benefits.
On Monday, President Trump signed the $717 billion annual National Defense Authorization Act, which was easily passed by Congress in the weeks prior. Much attention has understandably been placed on big-ticket items like $7.6 billion for acquiring 77 F-35 fighters, $21.9 billion for the nuclear weapons program, and $1.56 billion for three littoral combat ships--despite the fact that the Navy requested only one in the budget. What has gotten less attention is how the bill cements artificial intelligence programs in the Defense Department and lays the groundwork for a new national-level policy and strategy in the form of an artificial intelligence commission. As artificial intelligence and machine learning algorithms are integrated into defense technology, spending on these technologies is only going to increase in the years to come. While spending for many AI programs in the NDAA is in the tens of millions at present, one budget for a project that did not go through the normal appropriations process could have a total cost of $1.75 billion over the next seven years.
Google has quietly secured a contract to work on the Defense Department's new algorithmic warfare initiative, providing assistance with a pilot project to apply its artificial intelligence solutions to drone targeting. The military contract with Google is routed through a Northern Virginia technology staffing company called ECS Federal, obscuring the relationship from the public. The contract, first reported Tuesday by Gizmodo, is part of a rapid push by the Pentagon to deploy state-of-the-art artificial intelligence technology to improve combat performance. Google, which has made strides in applying its proprietary deep learning tools to improve language translation and vision recognition, has a cross-team collaboration within the company to work on the AI drone project. The team, The Intercept has learned, is working to develop deep learning technology to help drone analysts interpret the vast image data vacuumed up from the military's fleet of 1,100 drones in order to better target bombing strikes against the Islamic State.