Last year, Hewlett Packard Enterprise (HPE) allowed a Russian defense agency to analyze the source code of cybersecurity software used by the Pentagon, Reuters reports. The software, a product called ArcSight, is an important piece of cyber defense for the Army, Air Force, and Navy; it works by alerting users to suspicious activity -- such as a high number of failed login attempts -- that might be a sign of an ongoing cyber attack. The review of the software was done by a company called Echelon for Russia's Federal Service for Technical and Export Control while HPE was seeking to sell the software in the country. Although such reviews are common for outside companies looking to market these types of products in Russia, this one could have helped Russian officials find weaknesses in the software that could aid in attacks on U.S. military cyber networks. Echelon says it is required to report software vulnerabilities to the Russian government, but only after notifying the software makers.
The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement on whether such weapons should be banned in line with international humanitarian law under the Geneva Convention. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and fully autonomous weapons, and on whether more or fewer people will lose their lives as a result of the deployment of LAWS. There are also conflicting views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapon System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.
This week, Raytheon announced it successfully tested its anti-drone technology. The advanced high-power microwave and laser dune buggy brought down 45 unmanned aerial vehicles (UAVs) and drones at a U.S. Army exercise held in Fort Sill, Oklahoma. The microwave system was able to bring down multiple UAVs at once when the devices swarmed, while the high-energy laser (HEL) identified and shot down 12 Class I and II UAVs, as well as six stationary devices that propelled mortar rounds. The equipment is intended to protect U.S. troops against drones; it is self-contained and easy to deploy in a tense situation. The U.S. Air Force Research Laboratory worked with Raytheon to develop this counter-drone and UAV technology.
Although it tends to look to the sky, Israel Aerospace Industries (IAI) came back down to Earth to develop RoBattle, an unmanned ground vehicle (UGV) that may soon be tasked with the type of risky missions typically assigned to foot soldiers. IAI's UGV is built to be maneuverable, dynamic, and tough. Six wheels with independent suspension enable RoBattle to scale obstacles, such as rubble and small walls, to access areas that would typically be out of reach for other robots. A modular robotic kit allows the machine to be modified and adapted with remote vehicle control, navigation, and real-time mapping abilities, depending on its operational needs. RoBattle can operate independently or as a support unit for convoy protection, decoy, ambush, attack, intelligence, surveillance, or armed reconnaissance, according to IAI.
The United States Air Force wants robots and service members to be best buds on the battlefield. Last Friday, the Air Force announced a grant of $7.5 million for research on ways to make humans trust artificial intelligence (AI) so that people and machines can collaborate on missions. Soon service members in every branch of the Armed Forces will be working with AI on a daily basis--be it unmanned aerial vehicles, underwater drones, or robot soldiers (the U.S. military had to shelve the Boston Dynamics LS3 "robotic mule" because it was too loud, but last week the company revealed the much stealthier SpotMini). On November 1, 2014 (one week after Elon Musk compared developing AI to "summoning the demon"), Undersecretary of Defense Frank Kendall issued a memo asking the Defense Science Board to study what issues must be solved in order to expand the use of AI "across all war-fighting domains." But robotic weapons and soldiers won't be as effective if their human counterparts don't trust them.