Results


Cybersecurity: Keeping Up With AI and ML – Pirate Press

#artificialintelligence

Ransomware attackers are using USB drives to carry malware across the air gap that industrial distribution, manufacturing, and utility firms rely on as their first line of defense against cyber attacks. According to Honeywell's Industrial Cybersecurity USB Threat Report 2021, 79 percent of USB attacks have the potential to damage the operational technology (OT) that powers industrial processing plants. The report finds that malware-based USB attacks are among the most rapidly developing and difficult-to-detect threat vectors that process industries, such as public utilities, confront today. As the Colonial Pipeline and JBS Foods incidents demonstrate, this type of attack vector is particularly effective. Ransomware criminals are also targeting utility companies, as the thwarted water treatment plant attacks in Florida and Northern California illustrate.


Autonomous Attack Mitigation for Industrial Control Systems

arXiv.org Artificial Intelligence

Defending computer networks from cyber attack requires timely responses to alerts and threat intelligence. Decisions about how to respond involve coordinating actions across multiple nodes based on imperfect indicators of compromise while minimizing disruptions to network operations. Currently, playbooks are used to automate portions of a response process, but often leave complex decision-making to a human analyst. In this work, we present a deep reinforcement learning approach to autonomous response and recovery in large industrial control networks. We propose an attention-based neural architecture that is flexible to the size of the network under protection. To train and evaluate the autonomous defender agent, we present an industrial control network simulation environment suitable for reinforcement learning. Experiments show that the learned agent can effectively mitigate advanced attacks that progress with few observable signals over several months before execution. The proposed deep reinforcement learning approach outperforms a fully automated playbook method in simulation, taking less disruptive actions while also defending more nodes on the network. The learned policy is also more robust to changes in attacker behavior than playbook approaches.
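A minimal sketch of the core architectural idea, an attention-based policy whose per-node action scores do not depend on the number of nodes, might look as follows in PyTorch. The feature count, action set, and all names (NODE_FEATS, N_ACTIONS, AttentionDefender) are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

NODE_FEATS = 16  # per-node alert/telemetry features (assumed)
N_ACTIONS = 4    # e.g. monitor, isolate, restore, re-image (assumed)

class AttentionDefender(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(NODE_FEATS, d_model)
        # Self-attention lets each node's action depend on network-wide
        # context while remaining independent of the node count.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.policy = nn.Linear(d_model, N_ACTIONS)  # per-node action logits
        self.value = nn.Linear(d_model, 1)           # state-value head

    def forward(self, node_obs):
        # node_obs: (batch, n_nodes, NODE_FEATS); n_nodes may vary per call
        h = torch.relu(self.embed(node_obs))
        h, _ = self.attn(h, h, h)
        logits = self.policy(h)            # (batch, n_nodes, N_ACTIONS)
        value = self.value(h.mean(dim=1))  # pool over nodes for one value
        return logits, value

# The same weights score networks of different sizes.
net = AttentionDefender()
for n_nodes in (10, 50):
    logits, value = net(torch.randn(1, n_nodes, NODE_FEATS))
    print(logits.shape, value.shape)

Because the policy and value heads operate on per-node embeddings and a pooled summary, the trained agent transfers across networks of varying size, which is the flexibility property the abstract highlights.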


Better cybersecurity means finding the "unknown unknowns"

MIT Technology Review

During the past few months, Microsoft Exchange servers have been like chum in a shark-feeding frenzy. Threat actors have attacked critical zero-day flaws in the email software in an unrelenting cyber campaign that the US government has described as "widespread domestic and international exploitation" that could affect hundreds of thousands of organizations worldwide. Gaining visibility into an issue like this requires a full understanding of all assets connected to a company's network. This type of continuous inventory tracking doesn't scale with how humans work, but machines can handle it easily. For business executives juggling multiple post-pandemic priorities, the time to start prioritizing security is now. "It's pretty much impossible these days to run almost any size company where if your IT goes down, your company is still able to run," observes Matt Kraning, chief technology officer and co-founder of Cortex Xpanse, an attack surface management software vendor recently acquired by Palo Alto Networks. You might ask why companies don't simply patch their systems and make these problems disappear. If only it were that simple. Unless businesses have implemented a way to find and keep track of their assets, they have a tough time answering what seems like a straightforward question: how many routers, servers, or other assets do they have? If cybersecurity executives don't know the answer, it's impossible to convey an accurate level of vulnerability to the board of directors. And if the board doesn't understand the risk, and is then blindsided by something even worse than the Exchange Server and 2020 SolarWinds attacks, the story almost writes itself. That's why Kraning thinks it's so important to create a minimum set of standards.
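At its core, the continuous-inventory idea reduces to periodically diffing what is actually reachable on the network against what the organization believes it owns. A toy sketch of that diff is below; the scan and inventory sources are stubbed placeholders, and real attack surface management products draw on far richer data (cloud APIs, certificates, DNS).

from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    ip: str
    service: str  # e.g. "exchange-owa", "ssh", "rdp"

def scan_network() -> set[Asset]:
    """Placeholder for an active scan / passive discovery pass."""
    return {Asset("10.0.0.5", "exchange-owa"), Asset("10.0.0.9", "rdp")}

def load_known_inventory() -> set[Asset]:
    """Placeholder for the CMDB / asset database export."""
    return {Asset("10.0.0.5", "exchange-owa")}

observed = scan_network()
known = load_known_inventory()
unknown = observed - known   # assets nobody is tracking (or patching)
stale = known - observed     # inventory entries no longer seen on the wire

for a in sorted(unknown, key=lambda a: a.ip):
    print(f"UNTRACKED asset {a.ip} running {a.service}: triage for patching")

The "unknown unknowns" of the headline are exactly the `unknown` set: assets that exist on the network but appear in no inventory, so no one is patching them.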


Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems

arXiv.org Machine Learning

The proliferation and application of machine learning based Intrusion Detection Systems (IDS) have allowed for more flexibility and efficiency in the automated detection of cyber attacks in Industrial Control Systems (ICS). However, the introduction of such IDSs has also created an additional attack vector: the learning models themselves may be subject to cyber attacks, otherwise referred to as Adversarial Machine Learning (AML). Such attacks may have severe consequences in ICS environments, as adversaries could potentially bypass the IDS. This could lead to delayed attack detection, which may result in infrastructure damage, financial loss, and even loss of life. This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples using the Jacobian-based Saliency Map attack and exploring classification behaviours. The analysis also explores how such samples can be used to improve the robustness of supervised models through adversarial training. An authentic power system dataset was used to support the experiments presented herein. Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points when adversarial samples were present. Both classifiers' performance improved following adversarial training, demonstrating increased robustness to such attacks.
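Since Random Forest and J48 are not differentiable, one common way to mount a Jacobian-based saliency attack against them, assumed in this sketch rather than taken from the paper, is to craft samples against a differentiable surrogate and test whether they transfer. The data here is synthetic and the attack is a simplified single-feature JSMA-style step.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype(np.float32)   # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)       # toy attack/normal label

victim = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Differentiable surrogate used to compute the Jacobian-based saliency.
surrogate = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(surrogate(torch.from_numpy(X)),
                                       torch.from_numpy(y))
    loss.backward()
    opt.step()

def jsma_step(x, target_cls, eps=0.2):
    # Perturb the single feature whose gradient most favours target_cls.
    x = x.clone().requires_grad_(True)
    logits = surrogate(x)
    jac = torch.autograd.grad(logits[0, target_cls], x)[0][0]
    i = torch.argmax(jac.abs())
    with torch.no_grad():
        x[0, i] += eps * torch.sign(jac[i])
    return x.detach()

x0 = torch.from_numpy(X[:1])
adv = jsma_step(x0, target_cls=int(1 - y[0]))
print("victim prediction before/after:",
      victim.predict(x0.numpy()), victim.predict(adv.numpy()))

Iterating jsma_step and checking how often the Random Forest flips its label gives a rough transfer-rate measurement in the spirit of the paper's evaluation.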


Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

arXiv.org Machine Learning

In recent years, a variety of effective neural network-based methods for anomaly and cyber attack detection in industrial control systems (ICSs) have been demonstrated in the literature. Given their successful implementation and widespread use, there is a need to study adversarial attacks on such detection methods to better protect the systems that depend upon them. The extensive research performed on adversarial attacks on image and malware classification has little relevance to the physical system state prediction domain, to which most ICS attack detection systems belong. Moreover, such detection systems are typically retrained using new data collected from the monitored system, so the threat of adversarial data poisoning is significant; however, this threat has not yet been addressed by the research community. In this paper, we present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors. We propose two algorithms for generating poison samples, an interpolation-based algorithm and a back-gradient optimization-based algorithm, which we evaluate on both synthetic and real-world ICS data. We demonstrate that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector; however, the ability to poison the detector is limited to a small set of attack types and magnitudes. When the poison-generating algorithms are applied to the popular SWaT dataset, we show that the autoencoder detector trained on the physical system state data is resilient to poisoning in the face of all ten of the relevant attacks in the dataset. This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than those in other problem domains, such as malware detection and image processing.
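A hedged sketch of the interpolation-based poisoning idea: an online-retrained autoencoder is fed samples that step gradually from normal data toward the target attack state, dragging the learned notion of "normal" along with them. The model, step schedule, and data below are illustrative, not the paper's algorithm or values.

import torch
import torch.nn as nn

torch.manual_seed(0)
D = 10                               # number of sensor channels (assumed)
normal = torch.zeros(1, D)           # stand-in for the normal system state
attack = torch.full((1, D), 3.0)     # stand-in for the target attack state

ae = nn.Sequential(nn.Linear(D, 4), nn.ReLU(), nn.Linear(4, D))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)

def fit(batch, steps=100):           # stand-in for one online retraining round
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(batch), batch)
        loss.backward()
        opt.step()

def score(x):                        # reconstruction error = anomaly score
    with torch.no_grad():
        return nn.functional.mse_loss(ae(x), x).item()

fit(normal.repeat(64, 1))
print("attack score before poisoning:", round(score(attack), 4))

for alpha in torch.linspace(0.1, 1.0, 10):
    poison = (1 - alpha) * normal + alpha * attack   # step toward the attack
    fit(poison.repeat(64, 1))        # detector retrains on the poisoned data

print("attack score after poisoning: ", round(score(attack), 4))

Running this shows the attack's reconstruction error collapsing after poisoning; the paper's finding is that on real physical-state data such as SWaT, this drift is much harder to induce than the toy example suggests.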


Intrusion Detection for Industrial Control Systems: Evaluation Analysis and Adversarial Attacks

arXiv.org Machine Learning

Neural networks are increasingly used in security applications for intrusion detection on industrial control systems. In this work we examine two areas that must be considered for their effective use: first, their vulnerability to adversarial attacks when used in a time series setting, and second, the potential overestimation of performance arising from data leakage artefacts. To investigate these areas we implement a long short-term memory (LSTM) based intrusion detection system (IDS) which effectively detects cyber-physical attacks on a water treatment testbed, representing a strong baseline IDS. The first attacker model we consider is able to manipulate sensor readings on a subset of the Secure Water Treatment (SWaT) system. By creating a stream of adversarial data, the attacker is able to hide the cyber-physical attacks from the IDS. For the cyber-physical attacks which were detected by the IDS, the attacker required on average 2.48 of the 12 sensors to be compromised for the attacks to be hidden. The second attacker model we explore is an L∞-bounded attacker who can send fake readings to the IDS but, to remain imperceptible, limits the perturbations to the smallest L∞ value needed. Additionally, we examine data leakage problems arising from tuning for F1 score on the whole SWaT attack set and propose a method to tune detection parameters that does not use any attack data. When attack after-effects are accounted for, our new parameter tuning method achieves an F1 score of 0.811 ± 0.0103. Deep learning systems are known to be vulnerable to adversarial attacks at test time: by applying small changes to an input, an attacker can cause a machine learning system to misclassify with a high degree of success. There has been much work on both developing more powerful attacks [1] and defences [2]. However, the majority of adversarial machine learning research focuses on the image domain, and the different challenges that arise in other fields need consideration [3]. The phenomenon of adversarial examples becomes particularly pertinent when aiming to defend machine learning-based IDS.
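The attack-data-free tuning goal can be illustrated with a prediction-based LSTM IDS whose alert threshold is set purely from a high percentile of errors on attack-free validation data. The window size, percentile, and architecture below are assumptions, and the random tensors stand in for real SWaT sliding windows.

import torch
import torch.nn as nn

torch.manual_seed(0)
SENSORS, WINDOW = 12, 20   # SWaT-like sensor count; window size assumed

class LstmIDS(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(SENSORS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, SENSORS)

    def forward(self, x):             # x: (batch, WINDOW, SENSORS)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next sensor reading

model = LstmIDS()
# ... train with MSE on sliding windows of attack-free operation data ...

def errors(model, windows, next_readings):
    with torch.no_grad():
        return ((model(windows) - next_readings) ** 2).mean(dim=1)

# Threshold set from attack-free validation data only: no attack labels.
val_w = torch.randn(256, WINDOW, SENSORS)   # stand-in validation windows
val_next = torch.randn(256, SENSORS)
threshold = errors(model, val_w, val_next).quantile(0.995).item()

test_w = torch.randn(64, WINDOW, SENSORS)   # stand-in test windows
test_next = torch.randn(64, SENSORS)
alerts = errors(model, test_w, test_next) > threshold
print(f"{int(alerts.sum())} windows flagged above threshold {threshold:.3f}")

Because the threshold never sees attack data, it cannot leak information from the test attacks, which is the data-leakage pitfall the paper warns about when tuning directly for F1 on the full SWaT attack set.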


Anomaly Detection with Generative Adversarial Networks for Multivariate Time Series

arXiv.org Machine Learning

Today's Cyber-Physical Systems (CPSs) are large, complex, and equipped with networked sensors and actuators that are targets for cyber attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we propose a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We use an LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor's and actuator's time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deploy the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We use GAN-AD to distinguish abnormal attacked situations from normal working conditions in a complex six-stage Secure Water Treatment (SWaT) system. Experimental results show that the proposed strategy is effective in identifying anomalies caused by various attacks, with a high detection rate and low false positive rate compared to existing methods.
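A sketch of the GAN-AD scoring idea: after training an LSTM-based GAN on windows of normal operation, a test window is scored by combining the discriminator's output with the residual between the window and its best generator reconstruction, found by optimizing the latent code. The weights, dimensions, and iteration counts are illustrative, and the adversarial training loop itself is omitted.

import torch
import torch.nn as nn

torch.manual_seed(0)
SENSORS, WINDOW, LATENT = 6, 30, 8   # sizes assumed for illustration

class LstmGen(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(LATENT, hidden, batch_first=True)
        self.head = nn.Linear(hidden, SENSORS)
    def forward(self, z):                 # z: (batch, WINDOW, LATENT)
        out, _ = self.lstm(z)
        return self.head(out)             # (batch, WINDOW, SENSORS)

class LstmDisc(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(SENSORS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # P(window is "real")

G, D = LstmGen(), LstmDisc()
# ... adversarial training on windows of normal CPS operation goes here ...

def anomaly_score(x, lam=0.5, steps=50):
    # Invert G: search for the latent code that best reconstructs x.
    z = torch.randn(x.size(0), WINDOW, LATENT, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        residual = ((G(z) - x) ** 2).mean(dim=(1, 2))
        disc = 1.0 - D(x).squeeze(1)      # low "realness" -> more anomalous
    return lam * residual + (1 - lam) * disc

window = torch.randn(1, WINDOW, SENSORS)  # stand-in for a SWaT test window
print("anomaly score:", anomaly_score(window).item())

Scoring all sensors jointly in one multivariate window is what lets this style of detector pick up attacks that only manifest as broken correlations between sensors and actuators.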