
Collaborating Authors

 Jang-Jaccard, Julian


Classification and Explanation of Distributed Denial-of-Service (DDoS) Attack Detection using Machine Learning and Shapley Additive Explanation (SHAP) Methods

arXiv.org Artificial Intelligence

DDoS attacks involve overwhelming a target system with a large number of requests or traffic from multiple sources, disrupting the normal traffic of a targeted server, service, or network. Distinguishing between legitimate and malicious traffic is a challenging task. Machine learning and deep learning techniques make it possible to classify legitimate and malicious traffic and to analyze network traffic. However, an inter-model explanation of why a traffic flow is classified as benign or malicious is an important investigation into the inner workings of the model and increases its trustworthiness. Explainable Artificial Intelligence (XAI) can explain the decision-making of the machine learning models that classify and identify DDoS traffic. In this context, we propose a framework that not only classifies legitimate and malicious DDoS traffic but also uses SHAP to explain the decision-making of the classifier model. To address this concern, we first adopt feature selection techniques to select the top 20 important features based on feature importance techniques (e.g., XGB-based SHAP feature importance). Following that, the Multi-layer Perceptron Network (MLP) part of our proposed model uses the optimized features of the DDoS attack dataset as inputs to classify legitimate and malicious traffic. We perform extensive experiments with all features and with the selected features. The evaluation results show that the model with selected features achieves above 99% accuracy. Finally, to provide interpretability, XAI is adopted to explain the relationship between the prediction results and the features through SHAP's global and local explanations, which better explains the results achieved by our proposed framework.
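
A minimal sketch of the selection-then-classification pipeline described above, assuming a preprocessed DDoS dataset already split into X_train, X_test, y_train, y_test as NumPy arrays; the XGBoost and MLP hyperparameters are illustrative choices, not the authors' exact configuration.

```python
# Sketch: XGBoost-based SHAP feature importance -> top-20 features -> MLP classifier.
import numpy as np
import shap
import xgboost as xgb
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def select_top_features(X_train, y_train, k=20):
    """Rank features by mean |SHAP value| of an XGBoost model."""
    booster = xgb.XGBClassifier(n_estimators=100, max_depth=6)
    booster.fit(X_train, y_train)
    shap_values = np.asarray(shap.TreeExplainer(booster).shap_values(X_train))
    if shap_values.ndim == 3:                      # some shap versions return per-class arrays
        shap_values = shap_values[-1]
    importance = np.abs(shap_values).mean(axis=0)
    return np.argsort(importance)[::-1][:k]        # indices of the top-k features

top_idx = select_top_features(X_train, y_train, k=20)

# MLP trained only on the selected feature columns.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
mlp.fit(X_train[:, top_idx], y_train)
pred = mlp.predict(X_test[:, top_idx])
print("accuracy:", accuracy_score(y_test, pred))
```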


Reconstruction-based LSTM-Autoencoder for Anomaly-based DDoS Attack Detection over Multivariate Time-Series Data

arXiv.org Artificial Intelligence

A Distributed Denial-of-Service (DDoS) attack is a malicious attempt to disrupt the regular traffic of a targeted server, service, or network by sending a flood of traffic to overwhelm the target or its surrounding infrastructure. As technology improves, new attacks are developed by hackers. Traditional statistical and shallow machine learning techniques can detect superficial anomalies based on shallow data and feature selection; however, these approaches cannot detect unseen DDoS attacks. In this context, we propose a reconstruction-based anomaly detection model named LSTM-Autoencoder (LSTM-AE) which combines two deep learning-based models for detecting DDoS attack anomalies. The proposed structure of long short-term memory (LSTM) networks provides units that work with each other to learn the long short-term correlation of data within a time-series sequence. Autoencoders are used to identify the optimal threshold based on the reconstruction error rates evaluated on each sample across all time-series sequences. As such, the combined LSTM-AE model can not only learn delicate sub-pattern differences between attacks and benign traffic flows, but also reconstruct benign traffic with a lower reconstruction error, while attacks present a larger reconstruction error. In this research, we trained and evaluated our proposed LSTM-AE model on reflection-based DDoS attacks (DNS, LDAP, and SNMP). The results of our experiments demonstrate that our method performs better than other state-of-the-art methods, especially for LDAP attacks, with an accuracy of over 99%.
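
A minimal sketch of the reconstruction-error idea, assuming benign training windows X_benign and test windows X_test shaped (samples, timesteps, features); the layer sizes, training settings, and the 95th-percentile threshold rule are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: LSTM autoencoder trained on benign traffic; large reconstruction error => anomaly.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = X_benign.shape[1], X_benign.shape[2]

model = keras.Sequential([
    layers.LSTM(64, input_shape=(timesteps, n_features)),   # encoder
    layers.RepeatVector(timesteps),                          # latent vector -> sequence
    layers.LSTM(64, return_sequences=True),                  # decoder
    layers.TimeDistributed(layers.Dense(n_features)),        # reconstruct each time step
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_benign, X_benign, epochs=20, batch_size=128, validation_split=0.1)

def reconstruction_error(X):
    X_hat = model.predict(X)
    return np.mean((X - X_hat) ** 2, axis=(1, 2))            # one error value per window

# Threshold chosen from the benign reconstruction errors; windows above it are flagged as attacks.
threshold = np.percentile(reconstruction_error(X_benign), 95)
is_attack = reconstruction_error(X_test) > threshold
```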


Generative Adversarial Networks for Malware Detection: a Survey

arXiv.org Artificial Intelligence

Since their proposal in the 2014 paper by Ian Goodfellow, there has been an explosion of research into the area of Generative Adversarial Networks. While they have been utilised in many fields, the realm of malware research is a problem space in which GANs have taken root. From balancing datasets to creating unseen examples in rare classes, GAN models offer extensive opportunities for application. This paper surveys the current research and literature for the use of Generative Adversarial Networks in the malware problem space. This is done with the hope that the reader may be able to gain an overall understanding as to what the Generative Adversarial model provides for this field, and for what areas within malware research it is best utilised. It covers the current related surveys, the different categories of GAN, and gives the outcomes of recent research into optimising GANs for different topics, as well as future directions for exploration.


IGRF-RFE: A Hybrid Feature Selection Method for MLP-based Network Intrusion Detection on UNSW-NB15 Dataset

arXiv.org Artificial Intelligence

The effectiveness of machine learning models is significantly affected by the size of the dataset and the quality of features, as redundant and irrelevant features can radically degrade performance. This paper proposes IGRF-RFE: a hybrid feature selection method for multi-class network anomaly detection using a Multilayer Perceptron (MLP) network. IGRF-RFE can be considered a feature reduction technique based on both the filter feature selection method and the wrapper feature selection method. In our proposed method, we use the filter feature selection method, which combines Information Gain and Random Forest Importance, to reduce the feature subset search space. Then, we apply recursive feature elimination (RFE) as a wrapper feature selection method to further eliminate redundant features recursively on the reduced feature subsets. Our experimental results obtained on the UNSW-NB15 dataset confirm that our proposed method can improve the accuracy of anomaly detection while reducing the feature dimension. The results show that the feature dimension is reduced from 42 to 23 while the multi-classification accuracy of MLP is improved from 82.25% to 84.24%.
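
A minimal sketch of the filter-then-wrapper idea, assuming a labelled UNSW-NB15 feature matrix (X, y) as a pandas DataFrame and Series; the thresholds, the size of the reduced search space, and the estimator used inside RFE (scikit-learn's RFE needs feature importances, so a random forest stands in for the MLP wrapper here) are illustrative assumptions.

```python
# Sketch: Information Gain + Random Forest importance filter, followed by RFE down to 23 features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif, RFE

# Filter step: combine Information Gain and Random Forest importance scores.
ig = mutual_info_classif(X, y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
combined = ig / ig.sum() + rf.feature_importances_ / rf.feature_importances_.sum()
filtered = X.columns[np.argsort(combined)[::-1][:30]]        # reduced feature search space

# Wrapper step: recursive feature elimination on the reduced subset.
rfe = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=23)
rfe.fit(X[filtered], y)
selected = filtered[rfe.support_]
print("selected features:", list(selected))
```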


A Game-Theoretic Approach for AI-based Botnet Attack Defence

arXiv.org Artificial Intelligence

A strong cyber defense system should be able to detect, monitor, and promptly respond to cyber threats, including evolving and intelligent attacks, with appropriate defence mechanisms Hou et al. [2020], Brundage et al. [2018], Jang-Jaccard and Nepal [2014], Camp et al. [2019]. However, traditional defensive techniques cannot counter novel and evolving attacks that leverage AI technology to plan and launch attacks. AI-powered attacks can be categorized into AI-aided and AI-embedded attacks. In AI-aided attacks, intelligent attackers use AI-based techniques as a tool to detect and recognize the target network, its vulnerabilities, and valuable targets Kaloudi and Li [2020]. In AI-embedded attacks, by contrast, the threats themselves are weaponized with AI, such as DeepLocker Stoecklin [2018]. In Kaloudi and Li [2020], the authors investigated AI-powered cyber attacks and mapped them onto a proposed framework of new threats, including a classification of several aspects of threats that use AI during the cyber-attack life cycle.


Deep Q-Learning based Reinforcement Learning Approach for Network Intrusion Detection

arXiv.org Artificial Intelligence

The rise of a new generation of cyber threats demands more sophisticated and intelligent cyber defense solutions equipped with autonomous agents capable of learning to make decisions without the knowledge of human experts. Several reinforcement learning methods (e.g., Markov decision process-based approaches) for automated network intrusion tasks have been proposed in recent years. In this paper, we introduce a new generation of network intrusion detection methods that combines Q-learning-based reinforcement learning with a deep feed-forward neural network for network intrusion detection. Our proposed Deep Q-Learning (DQL) model provides an ongoing auto-learning capability for a network environment that can detect different types of network intrusions using an automated trial-and-error approach and continuously enhance its detection capabilities. We provide the details of fine-tuning the different hyperparameters involved in the DQL model for more effective self-learning. According to our extensive experimental results based on the NSL-KDD dataset, we confirm that a low discount factor of 0.001 with 250 episodes of training yields the best performance. Our experimental results also show that our proposed DQL is highly effective in detecting different intrusion classes and outperforms other similar machine learning approaches.
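
A minimal sketch of the Deep Q-Learning idea applied to intrusion classification, assuming preprocessed NSL-KDD features X and integer labels y as NumPy arrays; the reward scheme (+1 for a correct classification, -1 otherwise), network size, batch size, and epsilon value are illustrative assumptions. The discount factor 0.001 and the 250 training episodes follow the values reported in the abstract.

```python
# Sketch: epsilon-greedy Q-network where the "action" is the predicted intrusion class.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_classes = X.shape[1], int(y.max()) + 1
gamma, epsilon = 0.001, 0.1

q_net = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="linear"),            # one Q-value per intrusion class
])
q_net.compile(optimizer="adam", loss="mse")

for episode in range(250):
    idx = np.random.permutation(len(X))[:1024]               # sample a batch of flows
    states, labels = X[idx], y[idx]
    q_values = q_net.predict(states, verbose=0)
    actions = np.where(np.random.rand(len(idx)) < epsilon,
                       np.random.randint(n_classes, size=len(idx)),
                       q_values.argmax(axis=1))               # epsilon-greedy action selection
    rewards = np.where(actions == labels, 1.0, -1.0)
    targets = q_values.copy()
    # One-step Q-target; with gamma close to zero the update is driven almost entirely by the reward.
    targets[np.arange(len(idx)), actions] = rewards + gamma * q_values.max(axis=1)
    q_net.fit(states, targets, epochs=1, verbose=0)
```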


Task-Aware Meta Learning-based Siamese Neural Network for Classifying Obfuscated Malware

arXiv.org Artificial Intelligence

Malware authors apply different obfuscation techniques to the generic features of malware (i.e., unique malware signatures) to create new variants and avoid detection. Existing Siamese Neural Network (SNN) based malware detection methods fail to correctly classify different malware families when similar generic features are shared across multiple malware variants, resulting in high false-positive rates. To address this issue, we propose a novel Task-Aware Meta Learning-based Siamese Neural Network that is resilient against obfuscated malware while being able to detect malware trained with only one or a few training samples. Using entropy features of each malware signature alongside image features as task inputs, our task-aware meta learner generates the parameters for the feature layers to more accurately adjust the feature embedding for different malware families. In addition, our model utilizes meta-learning with the extracted features of a pre-trained network (e.g., VGG-16) to avoid the bias typically associated with a model trained on a limited number of training samples. Our proposed approach is highly effective in recognizing unique malware signatures, thus correctly classifying malware samples that belong to the same malware family even in the presence of obfuscation techniques applied to the malware. Our experimental results, validated with N-way, N-shot learning, show that our model is highly effective, with classification accuracy exceeding 91%, compared to other similar methods.
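
A minimal sketch of a Siamese comparison over pre-trained VGG-16 features, assuming malware binaries have already been rendered as 224x224 RGB images and paired with same-family/different-family labels; the embedding size, distance, and output head are illustrative, and the task-aware meta-learning parameter generation described in the paper is not reproduced here.

```python
# Sketch: frozen VGG-16 feature extractor shared between two inputs, trained on image pairs.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

vgg = keras.applications.VGG16(include_top=False, pooling="avg",
                               input_shape=(224, 224, 3))
vgg.trainable = False                                        # frozen pre-trained feature extractor

embed = keras.Sequential([vgg, layers.Dense(128)])           # shared embedding branch

img_a = keras.Input(shape=(224, 224, 3))
img_b = keras.Input(shape=(224, 224, 3))
dist = layers.Lambda(lambda t: tf.norm(t[0] - t[1], axis=1, keepdims=True))(
    [embed(img_a), embed(img_b)])                            # distance between embeddings
out = layers.Dense(1, activation="sigmoid")(dist)            # same family vs. different family
siamese = keras.Model([img_a, img_b], out)
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# siamese.fit([pairs_a, pairs_b], pair_labels, ...) on labelled malware-image pairs.
```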


MSD-Kmeans: A Novel Algorithm for Efficient Detection of Global and Local Outliers

arXiv.org Machine Learning

Outlier detection is a technique in data mining that aims to detect unusual or unexpected records in a dataset. Existing outlier detection algorithms have different pros and cons and exhibit different sensitivity to noisy data such as extreme values. In this paper, we propose a novel cluster-based outlier detection algorithm named MSD-Kmeans that combines the statistical method of Mean and Standard Deviation (MSD) and the machine learning clustering algorithm K-means to detect outliers more accurately with better control of extreme values. There are two phases in this combined MSD-Kmeans method: (1) applying the MSD algorithm to eliminate as much noisy data as possible to minimize interference with the clusters, and (2) applying the K-means algorithm to obtain locally optimal clusters. We evaluate our algorithm and demonstrate its effectiveness in the context of detecting possible overcharging of taxi fares, as greedy dishonest drivers may attempt to charge high fares by detouring. We compare the performance indicators of MSD-Kmeans with those of other outlier detection algorithms, such as MSD, K-means, Z-score, MIQR, and LOF, and show that the proposed MSD-Kmeans algorithm achieves the highest precision, accuracy, and F-measure. We conclude that MSD-Kmeans can be used for effective and efficient outlier detection on data of varying quality on IoT devices.
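
A minimal sketch of the two-phase MSD-Kmeans idea on a one-dimensional fare feature, assuming a numeric array fares; the 3-sigma cut, k=3, and the per-cluster outlier rule are illustrative assumptions rather than the paper's exact parameters.

```python
# Sketch: MSD pre-filter removes global extremes, K-means clusters the rest, then
# per-cluster mean/std bands flag local outliers.
import numpy as np
from sklearn.cluster import KMeans

def msd_kmeans(values, k=3, n_sigma=3.0):
    values = np.asarray(values, dtype=float)

    # Phase 1 (MSD): drop extreme values outside mean +/- n_sigma * std so they
    # cannot distort the cluster centres; these are the global outliers.
    mu, sigma = values.mean(), values.std()
    global_outliers = values[np.abs(values - mu) > n_sigma * sigma]
    kept = values[np.abs(values - mu) <= n_sigma * sigma]

    # Phase 2 (K-means): cluster the remaining data, then flag points outside
    # their own cluster's mean +/- n_sigma * std band as local outliers.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(kept.reshape(-1, 1))
    local_outliers = []
    for c in range(k):
        cluster = kept[km.labels_ == c]
        c_mu, c_sigma = cluster.mean(), cluster.std()
        local_outliers.extend(cluster[np.abs(cluster - c_mu) > n_sigma * c_sigma])
    return global_outliers, np.array(local_outliers)

global_out, local_out = msd_kmeans(fares)
```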