Anomaly Detection


Andrew Ng's Machine Learning Course in Python (Anomaly Detection)

#artificialintelligence

In this part of the assignment, we implement an anomaly detection algorithm using a Gaussian model to detect anomalous behavior, first in a 2D dataset and then in a high-dimensional dataset. The multivariate Gaussian distribution is an optional lecture in the course, and the code to compute its probability density is given to us. However, to proceed with the assignment, I needed to write the multivariateGaussian function from scratch. Some of the interesting functions used here come from NumPy's linear algebra module (numpy.linalg). The official documentation can be found here.
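As a rough illustration, here is one way such a function could be written with NumPy's linear algebra routines. The vectorized interface (one density per row of X) mirrors the assignment's convention, but this is a from-scratch sketch, not the course's official code.

import numpy as np

def multivariate_gaussian(X, mu, sigma2):
    """Probability density of each row of X under a Gaussian model.

    X      : (m, n) data matrix
    mu     : (n,) mean vector
    sigma2 : (n,) variances (diagonal covariance) or (n, n) covariance matrix
    """
    n = mu.size
    # Promote a vector of variances to a diagonal covariance matrix.
    if sigma2.ndim == 1:
        sigma2 = np.diag(sigma2)

    diff = X - mu                                   # deviations from the mean, (m, n)
    inv = np.linalg.pinv(sigma2)                    # pseudo-inverse of the covariance
    det = np.linalg.det(sigma2)                     # determinant of the covariance
    norm = (2 * np.pi) ** (-n / 2) * det ** (-0.5)  # normalizing constant
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu) for every row at once.
    quad = np.sum(diff @ inv * diff, axis=1)
    return norm * np.exp(-0.5 * quad)

Calling it with the per-feature mean and variance of the training data, for example multivariate_gaussian(X, X.mean(axis=0), X.var(axis=0)), recovers the diagonal-covariance case; passing a full covariance matrix gives the general multivariate density.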


Management AI: Anomaly Detection And Machine Learning

#artificialintelligence

When a person drives, many things are quickly noticed and then ignored. What gains attention are the things that might be a danger: a pedestrian who might step into the road, a light turning yellow, an adjacent car drifting into the same lane all need special attention. The same is true in the world of business computing. For instance, a sudden increase in sales is great, but the company needs to trace that anomalous increase back to its cause in order to understand and replicate it.


How to use machine learning for anomaly detection and condition monitoring

#artificialintelligence

In this article, I will introduce a couple of different techniques and applications of machine learning and statistical analysis, and then show how to apply these approaches to a specific use case in anomaly detection and condition monitoring. These are all terms you have probably heard or read about before. Behind all of these buzzwords, however, the main goal is to use technology and data to increase productivity and efficiency. The connectivity and flow of information between devices and sensors creates an abundance of available data. The key enabler is then being able to take these vast amounts of data and actually extract useful information from them, making it possible to reduce costs, optimize capacity, and keep downtime to a minimum.


Anomaly Detection with Isolation Forests using H2O - Open Source Leader in AI and ML

#artificialintelligence

Anomaly detection is a common data science problem in which the goal is to identify odd or suspicious observations, events, or items in our data that might indicate issues in the data collection process (such as broken sensors or typos in collected forms) or unexpected events such as security breaches and server failures. Anomaly detection can be performed in a supervised, semi-supervised, or unsupervised manner. In a supervised approach, we need to know whether each observation, event, or item is anomalous or genuine, and we use this information during training; obtaining such labels for every observation is often unrealistic. A semi-supervised approach assumes that we only know which observations are genuine (non-anomalous) and that we have no information about the anomalous observations.
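The article itself works with H2O's isolation forest; as a minimal, self-contained sketch of the same unsupervised idea, the snippet below uses scikit-learn's IsolationForest on synthetic data. The injected outliers and the contamination rate are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Mostly "genuine" observations plus a handful of injected outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
outliers = rng.uniform(low=-6, high=6, size=(50, 2))
X = np.vstack([normal, outliers])

# Unsupervised: the forest flags points that are easy to isolate.
forest = IsolationForest(n_estimators=200, contamination=0.05, random_state=0)
forest.fit(X)

labels = forest.predict(X)            # +1 = inlier, -1 = flagged anomaly
scores = forest.decision_function(X)  # lower scores = more anomalous
print("flagged anomalies:", int((labels == -1).sum()))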


Anomaly detection: Machine learning for real-time decision making

#artificialintelligence

Ever since the rise of big data, enterprises of all sizes have been in a state of uncertainty. Today we have more data available than ever before, but few organizations have been able to implement the procedures to turn this data into insights. To the human eye, there is simply too much data to process. Tim Keary looks at anomaly detection in this first of a series of articles. Unmanageable datasets have become a problem as organizations need to make faster decisions in real time.


GLAD: GLocalized Anomaly Detection via Active Feature Space Suppression

arXiv.org Machine Learning

We propose an algorithm called GLAD (GLocalized Anomaly Detection) that allows end-users to retain the use of simple and understandable global anomaly detectors by automatically learning their local relevance to specific data instances using label feedback. The key idea is to place a uniform prior on the relevance of each member of the anomaly detection ensemble over the input feature space via a neural network trained on unlabeled instances, and tune the weights of the neural network to adjust the local relevance of each ensemble member using all labeled instances. Our experiments on synthetic and real-world data show the effectiveness of GLAD in learning the local relevance of ensemble members and discovering anomalies via label feedback.
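A heavily simplified sketch of that idea, assuming PyTorch and toy data: a small network outputs per-member relevance weights, is first pushed toward the uniform prior on unlabeled data, and is then adjusted with a simple hinge loss on labeled instances. The network size, losses, and hyperparameters below are illustrative, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

n, m, d = 500, 4, 10
X = torch.randn(n, d)          # unlabeled instances (toy data)
scores = torch.rand(n, m)      # scores[i, j]: anomaly score of instance i under member j

relevance_net = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, m))
opt = torch.optim.Adam(relevance_net.parameters(), lr=1e-3)

def combined_score(x, s):
    w = F.softmax(relevance_net(x), dim=-1)   # local relevance of each ensemble member
    return (w * s).sum(dim=-1)                # relevance-weighted ensemble score

# 1) Uniform prior: on unlabeled data, push relevance weights toward 1/m.
uniform = torch.full((n, m), 1.0 / m)
for _ in range(200):
    w = F.softmax(relevance_net(X), dim=-1)
    loss = F.kl_div(w.log(), uniform, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Label feedback: push labeled anomalies above the current score threshold,
#    labeled nominals below it (a simple stand-in for the paper's feedback loss).
labeled_idx = torch.tensor([0, 1, 2, 3])
y = torch.tensor([1.0, -1.0, 1.0, -1.0])      # +1 anomaly, -1 nominal
for _ in range(200):
    tau = combined_score(X, scores).quantile(0.95).detach()
    s = combined_score(X[labeled_idx], scores[labeled_idx])
    loss = F.relu(1.0 - y * (s - tau)).mean()
    opt.zero_grad(); loss.backward(); opt.step()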


Spatially-weighted Anomaly Detection

arXiv.org Artificial Intelligence

Many types of anomaly detection methods have been proposed recently and applied to a wide variety of fields, including medical screening and production quality checking. Some methods use images, and in some cases some of the anomalous images are known beforehand. However, this kind of information is discarded by previous methods, because they can only make use of the normal pattern. Moreover, previous methods suffer a decrease in accuracy due to the negative effects of surrounding noise. In this study, we propose a spatially-weighted anomaly detection method (SPADE) that utilizes all of the known patterns and lessens the vulnerability to ambient noise by applying Grad-CAM, a visualization method for CNNs. We evaluate our method quantitatively on two datasets: the MNIST dataset with noise and a dataset based on a brief screening test for dementia.
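To make the weighting step concrete, here is a minimal sketch, assuming PyTorch and a tiny untrained CNN, of how a Grad-CAM heat map can spatially weight a per-pixel anomaly map. The model, the random data, and the use of a reconstruction-error map are illustrative stand-ins, not the paper's actual setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny untrained CNN standing in for the classifier whose Grad-CAM map
# down-weights irrelevant, noisy regions of the image.
class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        acts = self.features(x)                          # B x 16 x H x W feature maps
        logits = self.classifier(acts.mean(dim=(2, 3)))  # global average pooling
        return logits, acts

def grad_cam(model, x, target_class):
    logits, acts = model(x)
    acts.retain_grad()                                   # keep gradients of the feature maps
    logits[:, target_class].sum().backward()
    weights = acts.grad.mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * acts).sum(dim=1))            # B x H x W heat map
    return (cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)).detach()

model = TinyCNN()
image = torch.rand(1, 1, 28, 28)
pixel_anomaly = torch.rand(1, 28, 28)                    # e.g. a reconstruction-error map
weighted = pixel_anomaly * grad_cam(model, image, target_class=1)
print("spatially weighted anomaly score:", weighted.sum().item())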


Active Anomaly Detection via Ensembles

arXiv.org Machine Learning

In critical applications of anomaly detection, including computer security and fraud prevention, the anomaly detector must be configurable by the analyst to minimize the effort spent on false positives. One important way to configure the anomaly detector is by providing true labels for a few instances. We study the problem of label-efficient active learning to automatically tune anomaly detection ensembles and make four main contributions. First, we present an important insight into how anomaly detector ensembles are naturally suited for active learning. This insight allows us to relate the greedy querying strategy to uncertainty sampling, with implications for label-efficiency. Second, we present a novel formalism called compact description to describe the discovered anomalies and show that it can also be employed to improve the diversity of the instances presented to the analyst without loss in the anomaly discovery rate. Third, we present a novel data drift detection algorithm that not only detects drift robustly, but also allows us to take corrective actions to adapt the detector in a principled manner. Fourth, we present extensive experiments to evaluate our insights and algorithms in both batch and streaming settings. Our results show that, in addition to discovering significantly more anomalies than state-of-the-art unsupervised baselines, our active learning algorithms under the streaming-data setup are competitive with the batch setup.
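As a toy illustration of the greedy query loop, assuming a simulated analyst and made-up ensemble scores, the sketch below queries the top-ranked unlabeled instance and nudges the ensemble weights toward members that agreed with the label. The actual paper tunes the weights via a constrained optimization rather than this ad-hoc update.

import numpy as np

rng = np.random.default_rng(0)
n, m = 300, 5
member_scores = rng.random((n, m))     # stand-in scores from m global detectors
true_anomaly = rng.random(n) < 0.05    # hidden ground truth used to simulate the analyst

w = np.full(m, 1.0 / m)                # start with equal trust in every member
labeled = {}

for _ in range(20):                    # label budget of 20 queries
    combined = member_scores @ w
    # Greedy query: ask about the top-ranked instance not yet labeled.
    order = np.argsort(-combined)
    query = next(i for i in order if i not in labeled)
    labeled[query] = 1 if true_anomaly[query] else -1

    # Feedback update: move weight toward members that agreed with the label.
    y = labeled[query]
    w = np.clip(w + 0.1 * y * member_scores[query], 1e-6, None)
    w /= w.sum()                       # keep the weights a convex combination

print("queried anomalies:", sum(v == 1 for v in labeled.values()))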


Coupled IGMM-GANs for deep multimodal anomaly detection in human mobility data

arXiv.org Machine Learning

Detecting anomalous activity in human mobility data has a number of applications, including road hazard sensing, telematics-based insurance, and fraud detection in taxi services and ride sharing. In this paper we address two challenges that arise in the study of anomalous human trajectories: 1) a lack of ground truth data on what defines an anomaly, and 2) the dependence of existing methods on significant pre-processing and feature engineering. While generative adversarial networks seem like a natural fit for addressing these challenges, we find that existing GAN-based anomaly detection algorithms perform poorly due to their inability to handle multimodal patterns. For this purpose we introduce an infinite Gaussian mixture model coupled with (bi-directional) generative adversarial networks, IGMM-GAN, that is able to generate synthetic yet realistic human mobility data and simultaneously facilitates multimodal anomaly detection. Through estimation of a generative probability density on the space of human trajectories, we are able to generate realistic synthetic datasets that can be used to benchmark existing anomaly detection methods. The estimated multimodal density also allows for a natural definition of outlier, which we use for detecting anomalous trajectories. We illustrate our methodology and its improvement over existing GAN anomaly detection on several human mobility datasets, along with MNIST.
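The GAN component is beyond a short snippet, but the multimodal-density and outlier-definition step can be illustrated with scikit-learn's Dirichlet-process BayesianGaussianMixture, used here as a stand-in for the infinite Gaussian mixture fitted on hypothetical trajectory encodings.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Stand-in for learned trajectory encodings: three "routine" modes plus outliers.
modes = [rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [4, 0], [0, 4])]
outliers = rng.uniform(-2, 6, size=(10, 2))
Z = np.vstack(modes + [outliers])

# Dirichlet-process mixture: the number of active components is inferred from data.
igmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(Z)

log_density = igmm.score_samples(Z)          # log-likelihood under the multimodal density
threshold = np.quantile(log_density, 0.02)   # flag the least likely 2% as anomalies
print("flagged:", int((log_density < threshold).sum()))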


Anomaly Detection in the Presence of Missing Values

arXiv.org Machine Learning

Standard methods for anomaly detection assume that all features are observed at both learning time and prediction time. Such methods cannot process data containing missing values. This paper studies five strategies for handling missing values in test queries: (a) mean imputation, (b) MAP imputation, (c) reduction (reduced-dimension anomaly detectors via feature bagging), (d) marginalization (for density estimators only), and (e) proportional distribution (for tree-based methods only). Our analysis suggests that MAP imputation and proportional distribution should give better results than mean imputation, reduction, and marginalization. These hypotheses are largely confirmed by experimental studies on synthetic data and on anomaly detection benchmark data sets using the Isolation Forest (IF), LODA, and EGMM anomaly detection algorithms. However, marginalization worked surprisingly well for EGMM, and there are exceptions where reduction works well on some benchmark problems. We recommend proportional distribution for IF, MAP imputation for LODA, and marginalization for EGMM.
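As a rough illustration of two of these strategies, (a) mean imputation and (c) reduction, the sketch below scores test queries that are missing one feature using scikit-learn's IsolationForest. Unlike the paper, the reduced detector here is simply retrained on the observed features rather than built via feature bagging, and the data are synthetic.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 6))          # fully observed training data
X_test = rng.normal(size=(20, 6))
X_test[:, 2] = np.nan                  # one feature unobserved at prediction time

# (a) Mean imputation: fill the missing feature with its training mean.
imputer = SimpleImputer(strategy="mean").fit(X)
forest_full = IsolationForest(random_state=0).fit(X)
scores_mean_imputed = forest_full.decision_function(imputer.transform(X_test))

# (c) Reduction: score with a detector that only uses the observed features.
observed = [0, 1, 3, 4, 5]
forest_reduced = IsolationForest(random_state=0).fit(X[:, observed])
scores_reduced = forest_reduced.decision_function(X_test[:, observed])

print(scores_mean_imputed[:3], scores_reduced[:3])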