Cybersecurity was the virtual elephant in the showroom at this month's Consumer Electronics Show in Las Vegas. Attendees of the annual tech trade show, organized by the Consumer Technology Association, relished the opportunity to experience a future filled with delivery drones, autonomous vehicles, virtual and augmented reality and a plethora of "internet of things" devices, including fridges, wearables, televisions, routers, speakers, washing machines and even robot home assistants. Given the proliferation of connected devices (already estimated at 6.4 billion or more), there remains the critical question of how to ensure their security. The cybersecurity challenge posed by the internet of things is unique: the sheer scale of connected devices magnifies the consequences of insecurity.
It's hard to remember the days when artificial intelligence seemed like an intangible, futuristic concept. The field has been decades in the making, however, and the past 90 years have seen both renaissances and winters for it. At present, AI is steadily working its way into our personal lives with the rise of self-driving cars and intelligent personal assistants. In the enterprise, we likewise see AI rearing its head in adaptive marketing and cybersecurity. The rise of AI is exciting, but people often throw the term around in an attempt to win buzzword bingo, rather than to accurately reflect technological capabilities.
Heavy use of Google's search engine by NHS staff has triggered one of the company's cybersecurity defences. NHS Digital confirmed that so many NHS staff use the search engine that it had started asking them to take a quiz to verify they were "not a robot". News site the Register reported that one NHS Trust had told staff to "use Bing" instead. Google indicated its systems were designed to spot unusual traffic and were working as intended. Detecting suspicious traffic from one network can help defeat potential cyber-attacks, such as attempts to overwhelm a website.
Cashless payments are all the rage, but people in Sweden have been told to squirrel away notes and coins in case of a cyber attack on the nation's banks. Digital payments offer convenience for buyers and sellers alike, and the Scandinavian nation has been an eager adopter of the technology. Now, government experts are concerned that people could be left without any money should its computer networks fall victim to an attack. Sweden's Civil Contingencies Agency has issued guidance to every household telling residents to stockpile "cash in small denominations" for use in emergencies. The warning will ring alarm bells around the world as developed nations increasingly make the move to a cashless society.
Siddiqui, Md Amran (Oregon State University) | Fern, Alan (Oregon State University) | Wright, Ryan (Galois, Inc.) | Theriault, Alec (Galois, Inc.) | Archer, David (Galois, Inc.) | Maxwell, William (Galois, Inc.)
In this paper, we consider the problem of detecting unknown cyberattacks from audit data of system-level events. A key challenge is that different cyberattacks will have different suspicion indicators, which are not known beforehand. To address this, we consider a multi-view anomaly detection framework, where multiple expert-designed "views" of the data are created to capture features that may serve as potential indicators. Anomaly detectors are then applied to each view and the results are combined to yield an overall suspiciousness ranking of system entities. Unfortunately, there is often a mismatch between what anomaly detection algorithms find and what is actually malicious, which can result in many false positives. This problem is made even worse in the multi-view setting, where only a small subset of the views may be relevant to detecting a particular cyberattack. To help reduce the false positive rate, a key contribution of this paper is to incorporate feedback from security analysts about whether proposed suspicious entities are of interest or likely benign. This feedback is incorporated into subsequent anomaly detection in order to improve the suspiciousness ranking toward entities that are truly of interest to the analyst. For this purpose, we propose an easy-to-implement variant of the perceptron learning algorithm, which is shown to be quite effective on benchmark datasets. We evaluate our overall approach on real attack data from a DARPA red team exercise, which includes multiple attacks on multiple operating systems. The results show that the incorporation of feedback can significantly reduce the time required to identify malicious system entities.
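The feedback loop described in the abstract can be sketched as follows: each view assigns anomaly scores to entities, a weighted combination yields the suspiciousness ranking, and analyst feedback on the top-ranked entity drives a perceptron-style adjustment of the view weights. This is only an illustrative sketch of the general idea, not the paper's exact algorithm; the score values, update rule, and function names are all assumptions for the example.

```python
import numpy as np

def rank_entities(view_scores, weights):
    """Combine per-view anomaly scores into a suspiciousness ranking.

    view_scores: (n_entities, n_views) array of anomaly scores.
    weights: (n_views,) non-negative view weights.
    Returns entity indices ordered from most to least suspicious.
    """
    combined = view_scores @ weights
    return np.argsort(-combined)

def perceptron_feedback(view_scores, weights, entity, is_malicious, lr=1.0):
    """Perceptron-style weight update from one piece of analyst feedback.

    If the analyst flags the queried entity as malicious, boost the views
    that scored it highly; if benign, damp them. (Illustrative rule only.)
    """
    y = 1.0 if is_malicious else -1.0
    weights = weights + lr * y * view_scores[entity]
    return np.clip(weights, 0.0, None)  # keep view weights non-negative

# Toy data: 4 entities scored under 3 views. View 0 is noisy and makes
# entity 0 a false positive; only view 2 flags the true attack (entity 3).
scores = np.array([
    [1.0, 0.3, 0.0],   # entity 0: false positive driven by view 0
    [0.8, 0.2, 0.1],   # entity 1
    [0.1, 0.1, 0.2],   # entity 2
    [0.2, 0.1, 0.9],   # entity 3: the actual attack
])
w = np.ones(3)

ranking = rank_entities(scores, w)
top = ranking[0]                       # analyst inspects the top entity
w = perceptron_feedback(scores, w, top, is_malicious=(top == 3))
ranking = rank_entities(scores, w)     # re-rank after feedback
```

In this toy run the benign feedback on the false positive drives the noisy view's weight toward zero, so the true attack entity rises to the top of the re-ranking, mirroring the paper's claim that feedback steers the ranking toward entities of genuine interest.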