Self-Supervised Learning for Anomaly Detection in Python: Part 2

#artificialintelligence

Self-supervised learning is one of the most popular fields in modern deep-learning research. As Yann LeCun likes to say, self-supervised learning is the dark matter of intelligence and the way to create common sense in AI systems. The ideas and techniques of this paradigm attract many researchers who are trying to extend self-supervised learning to new research fields, and anomaly detection is, of course, no exception. In Part 1 of this article, we discussed the definition of anomaly detection and a technique called Kernel Density Estimation.
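
For reference, the KDE-based detector covered in Part 1 can be sketched in a few lines of Python with scikit-learn. The Gaussian kernel, the bandwidth of 0.5, and the 1st-percentile threshold below are illustrative choices, not prescriptions:

```python
# A minimal sketch of KDE-based anomaly detection: fit a density model on
# normal data, then flag low-density test points. Bandwidth and threshold
# are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))  # "normal" data only

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_train)

# Score new points: low log-density under the fitted model suggests an anomaly.
X_test = np.array([[0.1, -0.2], [4.0, 4.0]])
log_density = kde.score_samples(X_test)

# Flag points whose log-density falls below the 1st percentile of training scores.
threshold = np.percentile(kde.score_samples(X_train), 1)
is_anomaly = log_density < threshold
print(is_anomaly)  # expected: [False  True]
```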


Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection

arXiv.org Artificial Intelligence

Unsupervised anomaly detection aims to build models that effectively detect unseen anomalies by training on normal data alone. Although previous reconstruction-based methods have made fruitful progress, their generalization ability is limited by two critical challenges. First, the training dataset contains only normal patterns, which limits the model's generalization ability. Second, the feature representations learned by existing models often lack representativeness, which hampers their ability to preserve the diversity of normal patterns. In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability of unsupervised anomaly detection. Built on a convolutional autoencoder, AMSL incorporates a self-supervised learning module to learn general normal patterns and an adaptive memory fusion module to learn rich feature representations. Experiments on four public multivariate time series datasets demonstrate that AMSL significantly improves performance compared to other state-of-the-art methods. Specifically, on the largest CAP sleep stage detection dataset, with 900 million samples, AMSL outperforms the second-best baseline by more than 4% in both accuracy and F1 score. Apart from the enhanced generalization ability, AMSL is also more robust to input noise.
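
To make the reconstruction-based starting point concrete, here is a minimal PyTorch sketch of the convolutional-autoencoder backbone that methods like AMSL build on (the paper adds self-supervised and memory-fusion modules on top, which are not shown). The layer sizes, window length, and error-based scoring rule are illustrative assumptions, not the paper's configuration:

```python
# Train a 1-D convolutional autoencoder on normal multivariate time-series
# windows; at test time, high reconstruction error signals an anomaly.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAE()
normal_windows = torch.randn(32, 3, 128)  # batch of multivariate time-series windows
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on normal data only: minimize reconstruction error.
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal_windows), normal_windows)
    loss.backward()
    opt.step()

# Per-window reconstruction error serves as the anomaly score.
with torch.no_grad():
    errors = ((model(normal_windows) - normal_windows) ** 2).mean(dim=(1, 2))
```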


Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models

arXiv.org Artificial Intelligence

AI safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's predictions. In this paper, we propose a novel framework, deep verifier networks (DVN), to verify the inputs and outputs of deep discriminative models with deep generative models. Our proposed model is based on conditional variational autoencoders with disentanglement constraints. We give both intuitive and theoretical justifications for the model. Our verifier network is trained independently of the prediction model, which eliminates the need to retrain the verifier for each new model. We test the verifier network on out-of-distribution detection and adversarial example detection, as well as on anomaly detection in structured prediction tasks such as image caption generation. We achieve state-of-the-art results on all of these problems.
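
The core verification idea can be illustrated with a much simpler stand-in: score the input under a class-conditional density model for the classifier's predicted label and reject low-likelihood cases as unreliable. The paper uses a disentangled conditional VAE; the per-class KDE, the `fit_verifiers`/`verify` helpers, and the threshold below are hypothetical simplifications for illustration only:

```python
# Simplified sketch of generative verification: one density model per class,
# fitted on training features; a prediction is accepted only if the input is
# sufficiently likely under the density of the predicted class.
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_verifiers(features, labels, bandwidth=0.5):
    """Fit one density model per class on training features (hypothetical helper)."""
    return {c: KernelDensity(bandwidth=bandwidth).fit(features[labels == c])
            for c in np.unique(labels)}

def verify(verifiers, x, predicted_class, threshold):
    """Accept the prediction only if x is likely under the predicted class."""
    log_density = verifiers[predicted_class].score_samples(x.reshape(1, -1))[0]
    return log_density >= threshold
```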


Semi-Supervised Learning of Bearing Anomaly Detection via Deep Variational Autoencoders

arXiv.org Machine Learning

Most data-driven approaches applied to bearing fault diagnosis to date are established in the supervised learning paradigm, which usually requires a large set of labeled data collected a priori. In practical applications, however, obtaining accurate labels based on real-time bearing conditions can be far more challenging than simply collecting a huge amount of unlabeled data with various sensors. In this paper, we therefore propose a semi-supervised learning approach for bearing anomaly detection using deep generative models based on variational autoencoders (VAEs), which allows effective use of the dataset when only a small subset of the data is labeled. Finally, a series of experiments is performed using both the Case Western Reserve University (CWRU) bearing dataset and the University of Cincinnati's Center for Intelligent Maintenance Systems (IMS) dataset. The experimental results demonstrate that the proposed semi-supervised learning scheme greatly outperforms two mainstream semi-supervised learning approaches and a baseline supervised convolutional neural network, with overall accuracy improvements ranging from 3% to 30% across different proportions of labeled samples.
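
As background, the VAE building block used in such deep generative models can be sketched in PyTorch as follows. This shows only the encoder, reparameterized latent, decoder, and ELBO loss; the dimensions are illustrative, and the paper's semi-supervised scheme adds label information on top of this block, which is not shown:

```python
# Minimal VAE sketch: encode to a Gaussian posterior, sample via the
# reparameterization trick, decode, and train with reconstruction + KL loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim: int = 784, latent_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl
```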


AnoNet: Weakly Supervised Anomaly Detection in Textured Surfaces

arXiv.org Machine Learning

Humans can easily detect a defect (anomaly) because it is different or salient compared to the surface it resides on. Today, manual visual inspection is still the norm because anomaly detection is difficult to automate. Neural networks are a useful tool for teaching a machine to find defects, but they require many training examples to learn what a defect is, and such samples are tedious and expensive to obtain. We tackle the problem of teaching a network with a small number of training samples with a system we call AnoNet. AnoNet's architecture is similar to CompactCNN, with the exceptions that (1) it is a fully convolutional network and does not use strided convolution; (2) it is shallow and compact, which minimizes over-fitting by design; (3) the compact design constrains the size of intermediate features, which allows training without image downsizing; (4) the model footprint is low, making it suitable for edge computation; and (5) the anomaly can be detected and localized despite the weak labelling. AnoNet learns to detect the underlying shape of the anomalies despite the weak annotation, while preserving the spatial localization of the anomaly. Pre-seeding AnoNet with an engineered filter-bank initialization technique reduces the total samples required for training and also achieves state-of-the-art performance. Compared to CompactCNN, AnoNet achieves a 94% reduction in network parameters, from 1.13 million to 64 thousand. Experiments were conducted on four datasets, and results were compared against CompactCNN and DeepLabv3. AnoNet improved performance on average across all datasets by 106% to an F1 score of 0.98 and by 13% to an AUROC value of 0.942. AnoNet can learn from a limited number of images; for one of the datasets, it learned to detect anomalies after a single pass through just 53 training images.
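
The key architectural properties (fully convolutional, shallow, no strided convolutions, output at input resolution) can be sketched in PyTorch as below. The channel counts and kernel sizes here are illustrative guesses, not the paper's exact configuration:

```python
# Rough sketch of an AnoNet-style network: fully convolutional and shallow,
# with no strided convolutions, so the per-pixel anomaly map keeps the
# input's spatial resolution.
import torch
import torch.nn as nn

class AnoNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=11, padding=5),  # per-pixel anomaly score
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

model = AnoNetSketch()
image = torch.randn(1, 1, 256, 256)  # grayscale textured-surface image
anomaly_map = model(image)           # same 256x256 resolution as the input
print(anomaly_map.shape)             # torch.Size([1, 1, 256, 256])
```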