Artificial Intelligence (AI) in warfare has been growing rapidly. Several weapon systems now integrate AI software, gradually reducing the number of soldiers placed in direct mortal peril. These systems can target and attack people without human intervention. But the growth of this technology is raising concerns: several prominent scientists have already questioned the future of AI weaponry simply because of its unpredictability.
Artificial intelligence and machine learning are making their way into more security products, helping organizations and individuals automate certain tasks required to keep their services and information safe. Kashyap, the senior vice president and chief product officer at Cylance--a cybersecurity firm known for its use of AI--doesn't view AI and machine learning as a replacement for human workers but rather as a supplement that enables those workers to do their jobs more efficiently. He said there are now "billions of pieces of malware" in the wild and "well thought-out cyber campaigns" being carried out regularly, with targeted threats directed at individuals and organizations that require a more efficient way to check the validity of code and defend against attacks. With a widening gap between the number of security professionals needed and the number available--a shortage of more than 1.5 million is expected by 2020--Kashyap determined the issue no longer required just a human-scale solution; it needed a computing solution.
That said, the government has recently begun to act on the issue, starting with security guidelines for smart homes. While AI does make life easier, the fact remains that it is based on algorithms, and if a base algorithm is tampered with, the AI can be reprogrammed. Until these risks are properly assessed and preventive measures to plug vulnerabilities are put in place, AI adoption needs to be closely monitored. Governments need to put strict security guidelines in place, while tech companies need to address the issue more seriously and start issuing regular updates to plug vulnerabilities, the way they currently do for smartphones.
The advantages of such weapons were discussed in a New York Times article published last year, which stated that the speed and precision of the novel weapons could not be matched by humans. The official stance of the United States on such weapons was discussed at the Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems held in 2016 in Geneva, where the U.S. said that "appropriate levels" of human approval were necessary for any engagement of autonomous weapons involving lethal force. In 2015, numerous scientists and experts signed an open letter warning that developing such intelligent weapons could set off a global arms race. A similar letter, urging the United Nations to ban killer robots or lethal autonomous weapons, was signed by the world's top artificial intelligence (AI) and robotics companies at the International Joint Conference on Artificial Intelligence (IJCAI) held in Melbourne in August.
The memo cited a classified report, "DJI UAS Technology Threat and User Vulnerabilities," and a U.S. Navy memo, "Operational Risks with Regards to DJI Family of Products." The rule also applies to other items from the company, including flight computers, cameras, radios, batteries, speed controllers, GPS units, handheld control stations, and devices with DJI software applications installed. "We can confirm that guidance was issued," the U.S. Army told International Business Times on Tuesday, "however, we are currently reviewing the guidance and cannot comment further at this time." Others have expressed privacy concerns regarding data collection, as reports claimed DJI shared information with Chinese authorities, a claim the company has disputed.
The Army ordered its units to halt the use of DJI products, including all of the company's unmanned aerial vehicles (UAV). The Department of the Army memo notes that the Army Aviation Engineering Directorate has "issued over 300 separate Airworthiness Releases for DJI products in support of multiple organizations with a variety of mission sets."
A U.S. military DARPA program is putting $65 million into the creation of an implantable device that will provide data transfer between human brains and the digital world. The program seeks to heighten hearing, sight, and other sensory perception, as well as create a digital brain implant that relays neuron transmissions directly to digital devices. DARPA's research team acknowledged that creating an interface and communicating with the signals of one million neurons "sounds lofty," but Alvelda said the research will only map out a foundation for more complex work: "But if we're successful in delivering rich sensory signals directly to the brain, NESD will lay a broad foundation for new neurological therapies."
Reuters reported Wednesday that U.S. officials are concerned cutting-edge technologies such as artificial intelligence and machine learning could be used by the Chinese to augment their military capabilities and achieve greater advancements in strategic industries. Artificial intelligence and machine learning are seen as key components of the U.S. military drone program, which is an integral part of the fight against the Islamic State group. Reuters said it had reviewed a Pentagon report warning that China is avoiding U.S. oversight and gaining access to sensitive technology as the debate continues over strengthening the Committee on Foreign Investment in the United States, which reviews foreign acquisitions of U.S. companies on national security grounds. An aide to Sen. John Cornyn, R-Texas, told Reuters the lawmaker is working on legislation that would give the committee--composed of representatives from the departments of Treasury, Defense, Justice, Homeland Security, Commerce, State and Energy--more authority to block some technology investments.
The Air Force Research Laboratory (AFRL) recently tested autonomously flying F-16 fighter jets in collaboration with Lockheed Martin. The tests could mark a big leap for military drone technology, as these jets could be used in the future for large-scale air-to-ground strikes. "This demonstration is an important milestone in AFRL's maturation of technologies needed to integrate manned and unmanned aircraft in a strike package. We've not only shown how an Unmanned Combat Air Vehicle can perform its mission when things go as planned, but also how it will react and adapt to unforeseen obstacles along the way," Capt. Andrew Petry, AFRL autonomous flight operations engineer, said in a press release issued Monday by Lockheed Martin.
According to the report, most computer scientists believe fears about the possible threats posed by AI are "at best uninformed" and that those fears "do not align with the most rapidly advancing current research directions of AI as a field." It instead says these existential fears stem from a very particular--and small--part of the research field called Artificial General Intelligence (AGI), defined as an AI that can successfully perform any intellectual task a human can. The report argues we are unlikely to see an AGI emerge from current artificial intelligence research and that the concept "has high visibility, disproportionate to its size or present level of success." Musk launched a nonprofit AI research company called OpenAI in 2015 and pledged $1 billion to it, with the intention of developing best practices and helping prevent potentially damaging applications of the technology.