AI can detect signs of nuclear weapons testing banned under the Comprehensive Nuclear-Test-Ban Treaty, according to research from the US Pacific Northwest National Laboratory (PNNL), one of the United States Department of Energy's national laboratories. A group of scientists there has built a neural network to sniff out unusual nuclear activity, using deep learning to sort through different nuclear decay events and flag any suspicious behavior. The lab, buried beneath 81 feet of concrete, rock and earth, is shielded from energy from cosmic rays, electronics and other sources. That shielding means the data collected is less noisy, making it easier to pinpoint unusual activity.
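PNNL's actual model is not described in detail here, but the general approach of training a neural network to separate routine decay events from anomalous ones can be illustrated with a minimal sketch. Everything below is an assumption for the example: the two synthetic feature dimensions (standing in for measured pulse characteristics), the invented "background" vs. "suspicious" classes, and the tiny one-hidden-layer architecture.

```python
import numpy as np

# Hypothetical sketch: a tiny feed-forward classifier separating
# synthetic "background" decay events from "suspicious" ones.
# Features, data, and architecture are invented, not PNNL's model.
rng = np.random.default_rng(0)

# Two synthetic feature dimensions (e.g. pulse energy, timing),
# 200 events per class, drawn from overlapping Gaussians.
background = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
suspicious = rng.normal(loc=2.0, scale=1.0, size=(200, 2))
X = np.vstack([background, suspicious])
y = np.concatenate([np.zeros(200), np.ones(200)])

# One hidden layer of 8 tanh units, sigmoid output.
W1 = rng.normal(size=(2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)) * 0.5
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(500):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()   # P(event is "suspicious")
    # Gradient of mean cross-entropy loss w.r.t. the output logit.
    grad_logit = (p - y)[:, None] / len(y)
    # Backpropagate through the hidden layer before updating W2.
    grad_h = grad_logit @ W2.T * (1 - h**2)
    W2 -= lr * h.T @ grad_logit
    b2 -= lr * grad_logit.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The lab's shielding matters for exactly the reason the excerpt gives: with less background noise, the class distributions overlap less, and a classifier like this one separates them far more reliably.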
With each passing year, our sector continues to evolve its approach to fighting cyber threats, just as cyber crime itself continues to evolve. As both business and government move forward with digital transformation initiatives to improve processes and efficiency, the overall security attack surface continues to expand, offering more potential points of access for criminals to exploit. However, our industry is tackling these challenges head-on, with numerous innovative solutions continuing to come to market. So, what have been the key trends of 2018 thus far? From trade shows to conversations with customers, partners, analysts and the media, several have come to the forefront.
Artificial intelligence and machine learning cybersecurity technologies hold the key to stopping today's advanced threats. Enterprises are failing to keep up with the evolving threat landscape, massive data breaches and global cyberwarfare, the speed and magnitude of which are constantly increasing. Too many businesses still rely on internal resources and hardware that has a short shelf life and limited capacity. Hackers, on the other hand, are innovative. The only way to fight back is to use their own tactics against them, replacing antiquated and obsolete systems with adaptive technology.
Computer boffins have devised a potential hardware-based Trojan attack on neural network models that could be used to alter system output without detection. Adversarial attacks on neural networks and related deep learning systems have received considerable attention in recent years due to the growing use of AI-oriented systems. The researchers – doctoral student Joseph Clements and assistant professor of electrical and computer engineering Yingjie Lao at Clemson University in the US – say that they've come up with a novel threat model by which an attacker could maliciously modify hardware in the supply chain to interfere with the output of machine learning models run on the device. Attacks that focus on the supply chain appear to be fairly uncommon, though in 2014 reports surfaced that the US National Security Agency had practiced supply chain interdiction, intercepting hardware in transit to insert backdoors.
In 2017, the world witnessed a cyberattack of historic proportions. The WannaCry ransomware attack infected hundreds of thousands of computers in more than 150 countries, throwing a wrench in the digital gears of many businesses and bringing several industries to their knees with malicious software designed to block access to files until a "ransom" was paid. One industry that was hit particularly hard was health care, including organizations such as the National Health Service (NHS) in the U.K. and Merck in the U.S. One study found that last year, 78 percent of health-care providers reported a ransomware or malware attack. And perhaps we shouldn't be surprised: Patient records are filled with valuable and private information, and ineffective cybersecurity measures make it far too easy to hold those records hostage. Health care is an easy target for malware.
TradeRev-H, an enhanced version of ADESA's TradeRev app for one-hour online auctions, seeks to tap machine learning to change how vehicle inspections are done. KAR Auction Services Inc. unveiled the enhancements to TradeRev, which is based in North York, Ont., near Toronto, in March before the NADA Show. KAR in October 2017 acquired full ownership of the Canada-based TradeRev technology, which allows dealers to electronically bid on and purchase used vehicles before they reach physical auctions. To signal that the technology is groundbreaking, the "H" stands for Grace Hopper, the U.S. Navy rear admiral credited with developing the precursor to the Common Business Oriented Language, or COBOL, computer-programming language. "For any digital remarketing channel, the visual condition report is key," said Peter Kelly, KAR's chief technology officer.
Until now, that often has meant the use of third-party inspectors.
The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society. The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent.
People can differ on their perceptions of "evil." People can also change their minds. Still, it's hard to wrap one's head around how Google, famous for its "don't be evil" company motto, dealt with a small Defense Department contract involving artificial intelligence. Facing a backlash from employees, including an open letter insisting the company "should not be in the business of war," Google in April grandly defended involvement in a project "intended to save lives and save people from having to do highly tedious work." Less than two months later, chief executive officer Sundar Pichai announced that the contract would not be renewed, writing equally grandly that Google would shun AI applications for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."