Neural networks are increasingly used in security applications for intrusion detection on industrial control systems. In this work we examine two areas that must be considered for their effective use: first, their vulnerability to adversarial attacks when used in a time-series setting; second, potential overestimation of performance arising from data leakage artefacts. To investigate these areas we implement a long short-term memory (LSTM) based intrusion detection system (IDS) which effectively detects cyber-physical attacks on a water treatment testbed, representing a strong baseline IDS. The first attacker model we consider is able to manipulate sensor readings on a subset of the Secure Water Treatment (SWaT) system. By creating a stream of adversarial data, the attacker is able to hide the cyber-physical attacks from the IDS. For the cyber-physical attacks which are detected by the IDS, the attacker required on average 2.48 out of 12 total sensors to be compromised for the attacks to be hidden from the IDS. The second attacker model we explore is an L-bounded attacker who can send fake readings to the IDS but, to remain imperceptible, limits their perturbations to the smallest L value needed. Additionally, we examine data leakage problems arising from tuning for F1 score on the whole SWaT attack set and propose a method to tune detection parameters that does not utilise any attack data. If attack after-effects are accounted for, our new parameter tuning method achieves an F1 score of 0.811 ± 0.0103.

INTRODUCTION

Deep learning systems are known to be vulnerable to adversarial attacks at test time. By applying small changes to an input, an attacker can cause a machine learning system to misclassify with a high degree of success. There has been much work on both developing more powerful attacks as well as defences.
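The tuning approach described above, choosing detection parameters without any attack data, can be sketched as fitting a threshold to the detector's prediction errors on attack-free validation data, then raising an alarm only on a sustained exceedance. This is a minimal illustration, not the paper's exact procedure; the `percentile` and `window` values are assumed hyperparameters.

```python
import numpy as np

def fit_threshold(val_errors, percentile=99.5):
    """Choose a detection threshold from prediction errors on
    attack-free validation data only -- no attack labels needed."""
    return np.percentile(val_errors, percentile)

def detect(errors, threshold, window=3):
    """Flag an alarm when the prediction error exceeds the
    threshold for `window` consecutive time steps, which
    suppresses isolated noise spikes."""
    over = errors > threshold
    alarms = np.zeros_like(over)  # boolean array, all False
    run = 0
    for i, exceeded in enumerate(over):
        run = run + 1 if exceeded else 0
        if run >= window:
            alarms[i] = True
    return alarms
```

Requiring a run of consecutive exceedances is one common way to trade detection latency for a lower false-positive rate on noisy sensor streams.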
However, the majority of adversarial machine learning research is focused on the image domain, and consideration of the different challenges that arise within other fields is needed. This phenomenon of adversarial examples becomes particularly pertinent when aiming to defend machine learning systems.
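To make the bounded-attacker idea concrete, a standard projected-gradient sketch of an L-infinity-constrained perturbation is shown below. This is an illustration of the general technique, not the paper's attack (which instead seeks the smallest bound needed to evade detection); `grad_fn`, `eps`, and `steps` are assumed names.

```python
import numpy as np

def linf_attack(x, grad_fn, eps, steps=10):
    """PGD-style sketch: nudge sensor readings to reduce the
    detector's anomaly score while keeping every perturbed value
    within eps of the true reading (an L-infinity budget).
    `grad_fn` is assumed to return the gradient of the anomaly
    score with respect to the readings."""
    alpha = eps / steps          # per-step size, assumed schedule
    x_adv = x.copy()
    for _ in range(steps):
        # descend on the anomaly score to look "more normal"
        x_adv = x_adv - alpha * np.sign(grad_fn(x_adv))
        # project back into the imperceptibility budget
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The projection step is what keeps the fake readings imperceptible: no matter how the gradient steps accumulate, each value stays within eps of the real sensor stream.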
Today's Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of CPSs. On the other hand, the networked sensors and actuators generate large data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model system behaviour and classify deviant behaviours as possible attacks. In this work, we propose a novel Generative Adversarial Network based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We use an LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under the normal working conditions of a CPS. Instead of treating each sensor's and actuator's time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deploy the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We use GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results show that the proposed strategy is effective in identifying anomalies caused by various attacks, with a high detection rate and a low false positive rate compared to existing methods.
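The paragraph above combines two GAN signals, the generator's reconstruction residual and the discriminator's output, into one anomaly decision. A minimal sketch of such a combined score is given below; the weighting `lam` and the exact combination rule are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def gan_ad_score(x, x_recon, disc_prob, lam=0.5):
    """Combine the two GAN-AD signals: the residual between the
    generator's reconstruction and the actual sample, and the
    discriminator's probability that the sample is normal.
    A large residual or a low discriminator confidence both
    push the anomaly score up."""
    residual = np.abs(x - x_recon).mean()
    return lam * residual + (1 - lam) * (1.0 - disc_prob)
```

A sample whose score exceeds a threshold fitted on normal operating data would then be flagged as a possible attack, mirroring how both the generator and discriminator are exploited at detection time.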