The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical questions about the use of AI and fueling disagreement over whether such weapons should be banned under international humanitarian law, as codified in the Geneva Conventions. Much of the disagreement around LAWS centers on where the line should be drawn between weapons with limited human control and fully autonomous weapons, and on differences of opinion over whether deploying LAWS would cost more lives or fewer. There are also conflicting views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at the Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapon System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.
Last year, Hewlett Packard Enterprise (HPE) allowed a Russian defense agency to analyze the source code of cybersecurity software used by the Pentagon, Reuters reports. The software, a product called ArcSight, is an important piece of cyber defense for the Army, Air Force, and Navy; it works by alerting users to suspicious activity -- such as a high number of failed login attempts -- that might be a sign of an ongoing cyberattack. The review was conducted by a company called Echelon on behalf of Russia's Federal Service for Technical and Export Control while HPE was seeking to sell the software in the country. While such reviews are common for outside companies looking to market these types of products in Russia, this one could have helped Russian officials find weaknesses in the software that could aid attacks on U.S. military cyber networks. Echelon says it is required to report software vulnerabilities to the Russian government, but only after notifying the software makers.
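The kind of rule described above -- flagging an account once its failed logins cross a threshold -- can be sketched in a few lines. This is a minimal, hypothetical illustration of a generic SIEM-style detection rule, not ArcSight's actual implementation; the function name, event format, and threshold are all assumptions.

```python
from collections import Counter

# Illustrative threshold only; real SIEM rules tune this per environment.
FAILED_LOGIN_THRESHOLD = 5

def flag_suspicious_accounts(events, threshold=FAILED_LOGIN_THRESHOLD):
    """Return the set of accounts whose failed-login count meets the threshold.

    `events` is an iterable of (username, outcome) pairs, where outcome
    is either "success" or "failure". (Hypothetical event format.)
    """
    failures = Counter(user for user, outcome in events if outcome == "failure")
    return {user for user, count in failures.items() if count >= threshold}

events = (
    [("alice", "failure")] * 6        # looks like a brute-force attempt
    + [("bob", "failure")] * 2        # ordinary typos
    + [("alice", "success"), ("bob", "success")]
)
print(flag_suspicious_accounts(events))  # {'alice'}
```

Production systems typically add a time window (e.g., N failures within M minutes) and correlate across sources, but the core idea is the same counting-and-threshold check shown here.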
This week, Raytheon announced it had successfully tested its anti-drone technology. The system, which pairs an advanced high-power microwave with a high-energy laser (HEL) mounted on a dune buggy, brought down 45 unmanned aerial vehicles (UAVs) and drones at a U.S. Army exercise held at Fort Sill, Oklahoma. The microwave system was able to bring down multiple UAVs at once when the devices swarmed, while the HEL identified and shot down 12 Class I and II UAVs, as well as six stationary mortar projectiles. The equipment is intended to protect U.S. troops against drones; it is self-contained and easy to deploy in a tense situation. The U.S. Air Force Research Laboratory worked with Raytheon to develop the counter-drone technology.
Although it tends to look to the sky, Israel Aerospace Industries (IAI) came back down to Earth to develop RoBattle, an unmanned ground vehicle (UGV) that may soon be tasked with the kinds of risky missions typically assigned to foot soldiers. IAI's UGV is built to be maneuverable, dynamic, and tough. Six wheels with independent suspension enable RoBattle to scale obstacles, such as rubble and low walls, and to access areas that would typically be out of reach for other robots. A modular robotic kit allows the machine to be modified and adapted with remote vehicle control, navigation, and real-time mapping capabilities, depending on its operational needs. RoBattle can operate independently or as a support unit for convoy protection, decoy, ambush, attack, intelligence, surveillance, or armed reconnaissance missions, according to IAI.
After weeks of unrelenting chaos, the cybersecurity world took a bit of a breather. There was still one of the biggest data breaches in recent memory, courtesy of Under Armour. But hey, everyone makes mistakes, including the world's most elite hackers -- just ask the Russian intelligence agent behind the Guccifer 2.0 persona, whose failure to use a VPN just once outed him as a GRU officer. Or ask people who used Monero in its early days and put too much faith in its privacy protections, which a new study says are not as robust as they seemed, especially before a recent update. Or even ask Facebook, which left a privacy setting active for years that didn't actually do anything.