John Mern, Kyle Hatch, Ryan Silva, Cameron Hickert, Tamim Sookoor, Mykel J. Kochenderfer
Defending computer networks from cyber attack requires timely responses to alerts and threat intelligence. Decisions about how to respond involve coordinating actions across multiple nodes based on imperfect indicators of compromise while minimizing disruptions to network operations. Currently, playbooks are used to automate portions of a response process, but often leave complex decision-making to a human analyst. In this work, we present a deep reinforcement learning approach to autonomous response and recovery in large industrial control networks. We propose an attention-based neural architecture that is flexible to the size of the network under protection. To train and evaluate the autonomous defender agent, we present an industrial control network simulation environment suitable for reinforcement learning. Experiments show that the learned agent can effectively mitigate advanced attacks that progress with few observable signals over several months before execution. The proposed deep reinforcement learning approach outperforms a fully automated playbook method in simulation, taking less disruptive actions while also defending more nodes on the network. The learned policy is also more robust to changes in attacker behavior than playbook approaches.
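The abstract does not spell out the architecture, but its key property (per-node features processed with shared weights and self-attention, so one policy handles a network of any size) can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the class name, feature dimension, number of attention heads, and per-node action count are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of an attention-based defender policy that is
# independent of the number of nodes in the protected network.
# Dimensions and the action set are illustrative assumptions only.
import torch
import torch.nn as nn

class AttentionDefenderPolicy(nn.Module):
    def __init__(self, node_feat_dim=16, hidden_dim=64, n_actions_per_node=3):
        super().__init__()
        self.embed = nn.Linear(node_feat_dim, hidden_dim)              # per-node embedding
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, n_actions_per_node)   # per-node action logits

    def forward(self, node_obs):
        # node_obs: (batch, n_nodes, node_feat_dim); n_nodes may vary between calls
        h = torch.relu(self.embed(node_obs))
        h, _ = self.attn(h, h, h)          # nodes attend to one another
        return self.action_head(h)          # (batch, n_nodes, n_actions_per_node)

# The same instantiated policy scores actions for networks of different sizes.
policy = AttentionDefenderPolicy()
logits_small = policy(torch.randn(1, 10, 16))    # 10-node network
logits_large = policy(torch.randn(1, 500, 16))   # 500-node network
```

Because the embedding, attention, and action head share weights across nodes, the parameter count stays fixed as the protected network grows, which is what makes the approach flexible to network size.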
During the past few months, Microsoft Exchange servers have been like chum in a shark-feeding frenzy. Threat actors have exploited critical zero-day flaws in the email software in an unrelenting cyber campaign that the US government has described as "widespread domestic and international exploitation" that could affect hundreds of thousands of people worldwide. Gaining visibility into an issue like this requires a full understanding of every asset connected to a company's network. That kind of continuous inventory tracking doesn't scale with how humans work, but machines handle it easily. For business executives juggling multiple post-pandemic priorities, the time to start prioritizing security is now. "It's pretty much impossible these days to run almost any size company where if your IT goes down, your company is still able to run," observes Matt Kraning, chief technology officer and co-founder of Cortex Xpanse, an attack surface management software vendor recently acquired by Palo Alto Networks. You might ask why companies don't simply patch their systems and make these problems disappear. If only it were that simple. Businesses have a tough time answering what seems like a straightforward question: how many routers, servers, or other assets do they have? Unless they have implemented a way to find and keep track of their assets, that supposedly simple question is a head-scratcher. And if cybersecurity executives don't know the answer, it's impossible to convey an accurate level of vulnerability to the board of directors. If the board doesn't understand the risk--and is blindsided by something even worse than the Exchange Server and 2020 SolarWinds attacks--the story almost writes itself. That's why Kraning thinks it's so important to create a minimum set of standards.
Giulio Zizzo, Chris Hankin, Sergio Maffeis, Kevin Jones
Neural networks are increasingly used in security applications for intrusion detection on industrial control systems. In this work we examine two areas that must be considered for their effective use: first, their vulnerability to adversarial attacks when used in a time-series setting; and second, the potential overestimation of performance arising from data-leakage artefacts. To investigate these areas we implement a long short-term memory (LSTM) based intrusion detection system (IDS) which effectively detects cyber-physical attacks on a water treatment testbed, representing a strong baseline IDS. The first attacker is able to manipulate sensor readings on a subset of the Secure Water Treatment (SWaT) system. By creating a stream of adversarial data, the attacker is able to hide the cyber-physical attacks from the IDS. For the cyber-physical attacks which are detected by the IDS, the attacker required on average 2.48 out of 12 total sensors to be compromised for the cyber-physical attacks to be hidden from the IDS. The second attacker model we explore is an L∞-bounded attacker who can send fake readings to the IDS but, to remain imperceptible, limits their perturbations to the smallest L∞ value needed. Additionally, we examine data-leakage problems arising from tuning for F1 score on the whole SWaT attack set and propose a method to tune detection parameters that does not utilise any attack data. If attack aftereffects are accounted for, our new parameter tuning method achieves an F1 score of 0.811 ± 0.0103.
Introduction: Deep learning systems are known to be vulnerable to adversarial attacks at test time. By applying small changes to an input, an attacker can cause a machine learning system to misclassify with a high degree of success. There has been much work on developing both more powerful attacks [1] and defences [2]. However, the majority of adversarial machine learning research is focused on the image domain, and the different challenges that arise in other fields need further consideration [3]. This phenomenon of adversarial examples becomes particularly pertinent when aiming to defend machine learning-based intrusion detection systems.
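As a rough illustration of the prediction-based detection and the attack-free parameter tuning described above, the sketch below forecasts the next sensor vector with an LSTM and calibrates the alert threshold from residuals on normal (attack-free) data only. It assumes PyTorch; the layer sizes, window length, and residual quantile are illustrative choices, not the paper's settings.

```python
# Illustrative prediction-based LSTM IDS: forecast the next sensor reading and
# flag an anomaly when the prediction residual exceeds a threshold chosen from
# normal operation only. All hyperparameters here are assumptions.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_sensors=12, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_sensors)

    def forward(self, window):                  # window: (batch, T, n_sensors)
        h, _ = self.lstm(window)
        return self.out(h[:, -1])               # predict the next reading vector

def fit_threshold(model, normal_windows, normal_targets, quantile=0.999):
    # Calibrate on attack-free data only, mirroring the tuning strategy above.
    with torch.no_grad():
        residuals = (model(normal_windows) - normal_targets).abs().max(dim=1).values
    return torch.quantile(residuals, quantile).item()

def detect(model, window, target, threshold):
    with torch.no_grad():
        residual = (model(window) - target).abs().max().item()
    return residual > threshold                 # True -> raise an alert

# Example with synthetic data standing in for SWaT sensor streams:
model = LSTMForecaster()
train_windows, train_targets = torch.randn(256, 30, 12), torch.randn(256, 12)
threshold = fit_threshold(model, train_windows, train_targets)
alert = detect(model, torch.randn(1, 30, 12), torch.randn(1, 12), threshold)
```

An L∞-bounded adversary in this setting would perturb each element of the submitted window by at most some budget ε, trying to keep the residual under the calibrated threshold while a cyber-physical attack is under way.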
Dan Li, Dacheng Chen, Jonathan Goh, See-Kiong Ng
Today's Cyber-Physical Systems (CPSs) are large, complex, and equipped with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of the CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we proposed a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We used an LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor's and actuator's time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deployed the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks, with a high detection rate and a low false positive rate compared to existing methods.
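A minimal sketch of the GAN-AD scoring idea follows: an LSTM discriminator judges how "normal" a multivariate window looks, and its output is combined with the residual between the window and a generator-based reconstruction. The layer sizes, the weighting factor alpha, and the way the reconstruction is obtained are assumptions made for illustration, not the paper's exact formulation.

```python
# Hypothetical GAN-AD scoring sketch: combine a discrimination term with a
# reconstruction-residual term. All sizes and weights are illustrative.
import torch
import torch.nn as nn

class LSTMDiscriminator(nn.Module):
    def __init__(self, n_signals=8, hidden=100):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, window):                  # window: (batch, T, n_signals)
        h, _ = self.lstm(window)
        return self.head(h[:, -1])              # probability the window is "normal"

def anomaly_score(discriminator, window, reconstruction, alpha=0.5):
    with torch.no_grad():
        d_score = 1.0 - discriminator(window).squeeze().item()         # discrimination term
        residual = torch.mean((window - reconstruction).abs()).item()  # reconstruction term
    return alpha * d_score + (1.0 - alpha) * residual

# Example: score one synthetic window against a (here random) reconstruction;
# in practice the window would be flagged as an attack when the score exceeds
# a threshold calibrated on normal operating data.
disc = LSTMDiscriminator(n_signals=8)
w, recon = torch.randn(1, 60, 8), torch.randn(1, 60, 8)
score = anomaly_score(disc, w, recon)
```

Scoring all sensors and actuators jointly in one window is what lets the method pick up anomalies that only show as inconsistencies between signals, rather than in any single signal on its own.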