How SASE uses AI


Secure access service edge, or SASE, combines networking and security into a cloud-based service, and it is growing fast. According to Gartner projections, enterprise spending on SASE will hit almost $7 billion this year, up from under $5 billion in 2021. Gartner also predicts that more than 50% of organizations will have strategies to adopt SASE by 2025, up from less than 5% in 2020. The five core components of the SASE stack are SD-WAN, firewall-as-a-service (FWaaS), secure web gateway (SWG), cloud access security broker (CASB), and zero trust network access (ZTNA). As for building AI into SASE, "it's something that most, if not all, SASE vendors are working on," says Gartner analyst Joe Skorupa.

Are These Data 'Normal'? Anomalies & Outliers In Machine Learning


Nobody wants outliers in their data -- especially when they come from false entries caused by fat-thumbed typing. A couple of stray zeros can throw off an algorithm and wreck summary statistics. Here is how you can use machine learning to remove those pesky outliers. Historically, the first step in anomaly detection is to understand what is "normal", and then find examples of "not normal". These "not normal" points are what we would classify as outliers: they do not fit our expected distribution, even at its furthest ends.
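As a minimal sketch of that "normal vs. not normal" idea (using a robust modified z-score rather than any particular library), points far from the median can be flagged as outliers. The data values here are illustrative:

```python
import statistics

def mad_outliers(values, threshold=3.5):
    """Flag outliers via the modified z-score, built on the median absolute
    deviation (MAD). The median is barely moved by extreme values, so a single
    fat-thumbed entry cannot mask itself the way it can with mean/stdev."""
    median = statistics.median(values)
    mad = statistics.median(abs(x - median) for x in values)
    # If mad == 0 (more than half the values identical), flag nothing.
    return [x for x in values if mad and 0.6745 * abs(x - median) / mad > threshold]

data = [10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 1000.0]  # one fat-thumbed entry
print(mad_outliers(data))  # → [1000.0]
```

Note the design choice: a plain z-score (mean and standard deviation) can miss this very point, because the outlier inflates the standard deviation used to judge it; the median-based score does not suffer from that masking effect.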

101 Machine Learning Algorithms for Data Science with Cheat Sheets


These 101 algorithms are equipped with cheat sheets, tutorials, and explanations. Think of this as a one-stop shop/dictionary/directory for machine learning algorithms. The algorithms have been sorted into nine groups: Anomaly Detection, Association Rule Learning, Classification, Clustering, Dimensionality Reduction, Ensemble, Neural Networks, Regression, and Regularization. In this post, you'll find 101 machine learning algorithms with useful Python tutorials, R tutorials, and cheat sheets from Microsoft Azure ML, SAS, and Scikit-Learn to help you know when to use each one (where available). At Data Science Dojo, our mission is to make data science (machine learning in this case) available to everyone.

Calling All Data Scientists: Data Observability Needs You


We live in a complex world that is full of data, and it is getting fuller every day. In 2020, the world collectively created, captured, copied, and consumed some 64.2 zettabytes of data, and by 2025 that figure is expected to nearly triple to 180 zettabytes. Increasingly, companies depend on this data to create great experiences for customers and to drive revenue. At the same time, without a way to automate the detection of data quality issues, all of this data can quickly get out of hand, eroding trust and hurting the bottom line. Data observability systems have emerged as crucial tools for data-driven companies, helping them leverage huge amounts of data without sacrificing quality and reliability.
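To make "automated detection of data quality issues" concrete, here is a minimal sketch of one observability-style rule: alert when a column's missing-value rate exceeds a threshold. The column name, threshold, and sample batch are all hypothetical:

```python
def check_null_rate(rows, column, max_null_rate=0.05):
    """Alert when the share of missing values in `column` exceeds a threshold.
    `rows` is a list of dicts; a real system would run checks like this
    continuously over each incoming batch."""
    nulls = sum(1 for r in rows if r.get(column) is None)
    rate = nulls / len(rows) if rows else 1.0  # empty batch: treat as fully broken
    return {"column": column, "null_rate": rate, "ok": rate <= max_null_rate}

batch = [{"user_id": 1}, {"user_id": None}, {"user_id": 3}, {"user_id": 4}]
print(check_null_rate(batch, "user_id"))  # → {'column': 'user_id', 'null_rate': 0.25, 'ok': False}
```

Production systems layer many such checks (row counts, freshness, schema drift) and route failures to alerting, but the core loop is this simple: compute a metric per batch, compare to an expectation.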

KENN: Enhancing Deep Neural Networks by Leveraging Knowledge for Time Series Forecasting Artificial Intelligence

End-to-end data-driven machine learning methods often have exorbitant requirements for the quality and quantity of training data, which are often impractical to fulfill in real-world applications. This is especially true in the time series domain, where problems like disaster prediction, anomaly detection, and demand prediction often lack a large amount of historical data. Moreover, relying purely on past examples for training can be sub-optimal, since in doing so we ignore one very important source of information, i.e., domain knowledge, which has its own distinct advantages. In this paper, we propose a novel knowledge fusion architecture, Knowledge Enhanced Neural Network (KENN), for time series forecasting that specifically aims at combining the strengths of the knowledge and data domains while mitigating their individual weaknesses. We show that KENN not only reduces the data dependency of the overall framework but also improves performance, producing predictions that are better than those of purely knowledge-driven or purely data-driven approaches. We also compare KENN with state-of-the-art forecasting methods and show that predictions produced by KENN are significantly better even when trained on only 50% of the data.
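To illustrate the general knowledge-data fusion idea (this is a hedged sketch of one simple fusion scheme, not KENN's actual architecture): a knowledge-based rule produces a baseline forecast, and a data-driven model learns only the residual correction on top of it. All names and values below are hypothetical:

```python
def fused_forecast(knowledge_model, residual_model, history):
    """Residual fusion: domain knowledge supplies a baseline forecast, and a
    data-driven model corrects it, so less training data is needed than for a
    fully end-to-end learner."""
    baseline = knowledge_model(history)    # e.g., a seasonal or physical rule
    correction = residual_model(history)   # trained on (actual - baseline) errors
    return baseline + correction

# Toy example: "knowledge" is a weekly-seasonality rule (repeat last week's
# value); the residual model is a constant offset standing in for a trained net.
history = [100, 102, 98, 101, 99, 103, 100]
knowledge = lambda h: h[-7]
residual = lambda h: 1.5
print(fused_forecast(knowledge, residual, history))  # → 101.5
```

The appeal of this decomposition is exactly the point the abstract makes: when data is scarce, the knowledge rule carries most of the signal and the learned component only has to model what the rule gets wrong.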

Trustworthy Anomaly Detection: A Survey Artificial Intelligence

Anomaly detection has a wide range of real-world applications, such as bank fraud detection and cyber intrusion detection. In the past decade, a variety of anomaly detection models have been developed, leading to major progress toward accurately detecting various anomalies. Despite these successes, anomaly detection models still face many limitations. The most significant is whether we can trust the detection results from the models. In recent years, the research community has spent great effort designing trustworthy machine learning models, such as trustworthy classification models. However, the attention paid to anomaly detection tasks is far from sufficient. Considering that many anomaly detection tasks are life-changing decisions involving human beings, labeling someone as an anomaly or a fraudster should be done with extreme caution. Hence, ensuring that anomaly detection models operate in a trustworthy fashion is an essential requirement for deploying them to make automatic decisions in the real world. In this brief survey, we summarize existing efforts and discuss open problems toward trustworthy anomaly detection from the perspectives of interpretability, fairness, robustness, and privacy preservation.

Meta-learning with GANs for anomaly detection, with deployment in high-speed rail inspection system Artificial Intelligence

Anomaly detection has been an active research area with a wide range of potential applications. Key challenges for anomaly detection in the AI era with big data include a lack of prior knowledge of potential anomaly types, highly complex and noisy backgrounds in input data, scarce abnormal samples, and imbalanced training datasets. In this work, we propose a meta-learning framework for anomaly detection to deal with these issues. Within this framework, we incorporate the idea of generative adversarial networks (GANs) with appropriate choices of loss functions, including the structural similarity index measure (SSIM). Experiments with limited labeled data for high-speed rail inspection demonstrate that our meta-learning framework is sharp and robust in identifying anomalies. Our framework has been deployed in five high-speed railways of China since 2021: it has reduced workload by more than 99.7% and cut inspection time by 96.7%.
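SSIM, the loss mentioned above, compares two images by their luminance, contrast, and structure rather than raw pixel differences; in GAN-based inspection, a region that reconstructs with low SSIM is a candidate defect. Below is a minimal global (single-window) version of the standard SSIM formula; real implementations slide a local window over the image:

```python
def ssim_global(x, y, data_range=1.0):
    """Global SSIM between two equal-length grayscale pixel vectors, using the
    standard stabilizing constants C1=(0.01*L)^2 and C2=(0.03*L)^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [0.1, 0.5, 0.9, 0.3]
print(ssim_global(img, img))  # identical images score exactly 1.0
```

Identical inputs score 1.0 and structurally different ones score lower, which is what makes 1 - SSIM usable as a reconstruction loss that is more perceptually aligned than plain mean squared error.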

Two-Stage Deep Anomaly Detection with Heterogeneous Time Series Data Artificial Intelligence

We introduce a data-driven anomaly detection framework using a manufacturing dataset collected from a factory assembly line. Given heterogeneous time series data consisting of operation cycle signals and sensor signals, we aim to discover abnormal events. Motivated by our empirical finding that conventional single-stage benchmark approaches may not perform satisfactorily in our challenging setting, we propose a two-stage deep anomaly detection (TDAD) framework in which two different unsupervised learning models are adopted depending on the type of signal. In Stage I, we select anomaly candidates using a model trained on operation cycle signals; in Stage II, we detect abnormal events among the candidates using another model, suitable for exploiting temporal continuity, trained on sensor signals. A distinguishing feature of our framework is that operation cycle signals are exploited first to find likely anomalous points, whereas sensor signals are leveraged afterward to filter out unlikely anomalous points. Our experiments comprehensively demonstrate the superiority over single-stage benchmark approaches, the model-agnostic property, and the robustness to difficult situations.
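The two-stage pipeline described above can be sketched as a propose-then-filter loop. This is a hedged illustration of the control flow only: TDAD's actual models are unsupervised neural networks, whereas the scorers and thresholds here are hypothetical stand-ins:

```python
def two_stage_detect(events, score_cycle, score_sensor, t1=0.8, t2=0.8):
    """Stage I proposes candidates from cycle-signal scores (cast a wide net);
    Stage II confirms or discards them using sensor-signal scores."""
    candidates = [e for e in events if score_cycle(e) > t1]   # Stage I
    return [e for e in candidates if score_sensor(e) > t2]    # Stage II

# Toy scorers: pretend each event already has per-model anomaly scores.
cycle_scores = {"e1": 0.9, "e2": 0.85, "e3": 0.1}
sensor_scores = {"e1": 0.95, "e2": 0.2, "e3": 0.99}
print(two_stage_detect(["e1", "e2", "e3"], cycle_scores.get, sensor_scores.get))  # → ['e1']
```

The division of labor mirrors the abstract: the first model is tuned for recall (don't miss likely anomalies), the second for precision (discard the false alarms the first one lets through).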

ARIBA: Towards Accurate and Robust Identification of Backdoor Attacks in Federated Learning Artificial Intelligence

The distributed nature and privacy-preserving characteristics of federated learning make it prone to poisoning attacks, especially backdoor attacks, in which the adversary implants backdoors to misguide the model on certain attacker-chosen sub-tasks. In this paper, we present a novel method, ARIBA, to accurately and robustly identify backdoor attacks in federated learning. Through an empirical study, we observe that backdoor attacks are discernible in the filters of CNN layers. Based on this finding, we employ unsupervised anomaly detection to evaluate the pre-processed filters and calculate an anomaly score for each client. We then identify the most suspicious clients according to their anomaly scores. Extensive experiments show that ARIBA can effectively and robustly defend against multiple state-of-the-art attacks without degrading model performance.
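The final step, picking the most suspicious clients from per-client anomaly scores, can be illustrated with a simple robust rule (not ARIBA's actual scoring, which operates on CNN filters): flag clients whose score sits far above the group's median. Client names and scores are hypothetical:

```python
import statistics

def suspicious_clients(scores, k=3.0):
    """Given {client_id: anomaly_score}, flag clients whose score exceeds the
    median by more than k robust deviations (MAD). One-sided, since only
    unusually HIGH anomaly scores indicate a likely backdoored update."""
    med = statistics.median(scores.values())
    mad = statistics.median(abs(s - med) for s in scores.values()) or 1e-9
    return sorted(c for c, s in scores.items() if (s - med) / mad > k)

scores = {"client_1": 0.11, "client_2": 0.12, "client_3": 0.95, "client_4": 0.10}
print(suspicious_clients(scores))  # → ['client_3']
```

Using median/MAD rather than mean/stdev matters here for the same reason it does in any poisoning defense: the attacker's own inflated score must not be allowed to shift the baseline it is judged against.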


AAAI Conferences

Automated analysis methods are crucial aids for monitoring and defending a network to protect the sensitive or confidential data it hosts. This work introduces a flexible, powerful, and unsupervised approach to detecting anomalous behavior in computer and network logs, one that largely eliminates the domain-dependent feature engineering employed by existing methods. By treating system logs as threads of interleaved "sentences" (event log lines) and using them to train online unsupervised neural network language models, our approach provides an adaptive model of normal network behavior. We compare the effectiveness of standard and bidirectional recurrent neural network language models at detecting malicious activity within network log data. Extending these models, we introduce a tiered recurrent architecture, which provides context by modeling sequences of users' actions over time. Compared to Isolation Forest and Principal Component Analysis, two popular anomaly detection algorithms, we observe superior performance on the Los Alamos National Laboratory Cyber Security dataset. For log-line-level red team detection, our best-performing character-based model achieves a test set area under the receiver operating characteristic curve of 0.98, demonstrating the strong fine-grained anomaly detection performance of this approach on open-vocabulary logging sources.
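The core scoring idea, a log line is anomalous if it is improbable under a language model of normal logs, can be demonstrated with a toy character-bigram model standing in for the paper's recurrent networks (the log lines and smoothing constants below are illustrative assumptions):

```python
import math
from collections import Counter

def train_bigram(lines):
    """Count character bigrams over 'normal' log lines, with ^/$ boundary marks."""
    pairs, totals = Counter(), Counter()
    for line in lines:
        s = "^" + line + "$"
        pairs.update(zip(s, s[1:]))
    for (a, _), n in pairs.items():
        totals[a] += n
    return pairs, totals

def surprise(model, line, vocab=128):
    """Average negative log-probability per character under the model.
    High surprise = the line does not look like normal traffic."""
    pairs, totals = model
    s = "^" + line + "$"
    nll = 0.0
    for a, b in zip(s, s[1:]):
        p = (pairs[(a, b)] + 1) / (totals[a] + vocab)  # add-one smoothing
        nll -= math.log(p)
    return nll / (len(s) - 1)

normal = ["login user=alice ok", "login user=bob ok"] * 50
model = train_bigram(normal)
print(surprise(model, "login user=alice ok") < surprise(model, "XQ#!rm -rf /secrets"))  # → True
```

A recurrent model replaces the bigram table with learned long-range context (and the tiered architecture adds per-user session context), but the detection criterion is the same: rank lines by how surprised the model of "normal" is, and flag the top of the list.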