Choose the right AI method for the job

#artificialintelligence

It's hard to remember the days when artificial intelligence seemed like an intangible, futuristic concept. The field has been decades in the making, however, and the past 90 years have seen both renaissances and winters. At present, AI is steadily working its way into our personal lives with the rise of self-driving cars and intelligent personal assistants, and in the enterprise it is surfacing in adaptive marketing and cybersecurity. The rise of AI is exciting, but people often throw the term around in an attempt to win buzzword bingo rather than to accurately reflect technological capabilities.


Cybersecurity in the Internet of Things is a game of incentives

#artificialintelligence

Cybersecurity was the virtual elephant in the showroom at this month's Consumer Electronics Show in Las Vegas. Attendees of the annual tech trade show, organized by the Consumer Technology Association, relished the opportunity to experience a future filled with delivery drones, autonomous vehicles, virtual and augmented reality and a plethora of "Internet of Things" devices, including fridges, wearables, televisions, routers, speakers, washing machines and even robot home assistants. Given the proliferation of connected devices--already, there are estimated to be at least 6.4 billion--there remains the critical question of how to ensure their security. The cybersecurity challenge posed by the Internet of Things is unique: the scale of connected devices magnifies the consequences of insecurity.


Detecting Cyberattack Entities from Audit Data via Multi-View Anomaly Detection with Feedback

AAAI Conferences

In this paper, we consider the problem of detecting unknown cyberattacks from audit data of system-level events. A key challenge is that different cyberattacks will have different suspicion indicators, which are not known beforehand. To address this, we consider a multi-view anomaly detection framework, where multiple expert-designed "views" of the data are created to capture features that may serve as potential indicators. Anomaly detectors are then applied to each view and the results are combined to yield an overall suspiciousness ranking of system entities. Unfortunately, there is often a mismatch between what anomaly detection algorithms find and what is actually malicious, which can result in many false positives. This problem is made even worse in the multi-view setting, where only a small subset of the views may be relevant to detecting a particular cyberattack. To help reduce the false positive rate, a key contribution of this paper is to incorporate feedback from security analysts about whether proposed suspicious entities are of interest or likely benign. This feedback is incorporated into subsequent anomaly detection to improve the suspiciousness ranking toward entities that are truly of interest to the analyst. For this purpose, we propose an easy-to-implement variant of the perceptron learning algorithm, which is shown to be quite effective on benchmark datasets. We evaluate our overall approach on real attack data from a DARPA red team exercise, which includes multiple attacks on multiple operating systems. The results show that incorporating feedback can significantly reduce the time required to identify malicious system entities.
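To make the framework concrete, here is a minimal sketch of the general idea rather than the paper's exact algorithm: each expert-designed view is assumed to produce a numeric anomaly score per system entity, the scores are combined with a weight vector to produce the suspiciousness ranking, and a perceptron-style update nudges the weights whenever an analyst labels a reviewed entity as malicious or benign. The update rule, learning rate, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rank_entities(score_matrix, weights):
    """Rank entities (rows) by combined suspiciousness, most suspicious first."""
    combined = score_matrix @ weights
    return np.argsort(-combined)

def perceptron_feedback_update(weights, view_scores, is_malicious, lr=0.1):
    """Perceptron-style update: boost views that flagged a confirmed-malicious
    entity, down-weight views that flagged a benign one."""
    direction = 1.0 if is_malicious else -1.0
    weights = weights + lr * direction * view_scores
    return np.clip(weights, 0.0, None)  # keep view weights non-negative

# Toy loop: the analyst reviews the top-ranked unreviewed entity each round.
rng = np.random.default_rng(0)
scores = rng.random((100, 4))       # 100 entities x 4 views (made-up scores)
labels = rng.random(100) < 0.05     # hidden ground truth, for the toy example only
w = np.ones(4) / 4                  # start with uniform view weights
reviewed = set()
for _ in range(10):
    for idx in rank_entities(scores, w):
        if idx not in reviewed:
            reviewed.add(idx)
            w = perceptron_feedback_update(w, scores[idx], labels[idx])
            break
print("learned view weights:", w)
```

In this toy loop, views that repeatedly surface confirmed-malicious entities gain weight while views that generate false positives are down-weighted, which is the intuition behind steering the ranking toward entities the analyst actually cares about.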


Tech Advances Make It Easier to Assign Blame for Cyberattacks

WSJ.com: WSJD - Technology

"All you have to do is look at the attacks that have taken place recently--WannaCry, NotPetya and others--and see how quickly the industry and government is coming out and assigning responsibility to nation states such as North Korea, Russia and Iran," said Dmitri Alperovitch, chief technology officer at CrowdStrike Inc., a cybersecurity company that has investigated a number of state-sponsored hacks. The White House and other countries took roughly six months to blame North Korea and Russia for the WannaCry and NotPetya attacks, respectively, while it took about three years for U.S. authorities to indict a North Korean hacker for the 2014 attack against Sony . Forensic systems are gathering and analyzing vast amounts of data from digital databases and registries to glean clues about an attacker's infrastructure. These clues, which may include obfuscation techniques and domain names used for hacking, can add up to what amounts to a unique footprint, said Chris Bell, chief executive of Diskin Advanced Technologies, a startup that uses machine learning to attribute cyberattacks. Additionally, the increasing amount of data related to cyberattacks--including virus signatures, the time of day the attack took place, IP addresses and domain names--makes it easier for investigators to track organized hacking groups and draw conclusions about them.


Seeing AI to AI: Artificial Intelligence and its Impact on Cybersecurity

#artificialintelligence

Have you ever uploaded a photo of you and your friends to Facebook, only to see that Facebook has automatically identified your friends in the photo and asked permission to tag them? You likely use other forms of AI throughout your day without even realizing it: through Siri's speech recognition, Google's search engine, and even the spam filters that clean up your email inbox. These are all forms of what we call narrow AI, technology built to perform a specific task, as opposed to general AI, which is meant to solve broader and more complex problems. AI is often applied to classification and forecasting. Classification involves organizing data and assigning labels through pattern matching, while forecasting predicts future values based on known data.
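As a quick illustration of that distinction, here is a minimal sketch using scikit-learn; the features, labels, and numbers are made up purely to show the two task types and are not drawn from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: learn to assign labels from patterns in known examples.
# (Toy stand-in for spam filtering: two numeric features per email.)
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]])
y = np.array([1, 1, 0, 0])                 # 1 = spam, 0 = not spam (made-up labels)
clf = LogisticRegression().fit(X, y)
print("classified as:", clf.predict([[0.15, 0.85]]))

# Forecasting: predict a future value from past observations.
history = np.array([10, 12, 13, 15, 16, 18], dtype=float)   # e.g. daily alert counts
t = np.arange(len(history)).reshape(-1, 1)
trend = LinearRegression().fit(t, history)
print("next-day forecast:", trend.predict([[len(history)]]))
```

The classifier assigns one of the known labels to a new example, while the forecaster extrapolates the series one step beyond the data it has already seen.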