Collaborating Authors

feature selection

Feature Selection for Learning to Predict Outcomes of Compute Cluster Jobs with Application to Decision Support Artificial Intelligence

We present a machine learning framework and a new test bed for data mining from the Slurm Workload Manager for high-performance computing (HPC) clusters. The focus was on selecting features to support decisions: helping users decide whether to resubmit failed jobs with boosted CPU and memory allocations or to migrate them to a computing cloud. We cast this task as both supervised classification and regression learning, and more specifically as sequential problem solving suitable for reinforcement learning. Selecting relevant features can improve training accuracy, reduce training time, and yield a more comprehensible model, enabling an intelligent system to explain its predictions and inferences. We present a supervised learning model trained on a Simple Linux Utility for Resource Management (Slurm) data set of HPC jobs using three feature selection techniques: linear regression, lasso, and ridge regression. Because our data set represented both HPC jobs that failed and those that succeeded, our model was reliable, less likely to overfit, and generalizable. It achieved an R^2 of 95% with 99% accuracy, and we identified five predictors for both CPU and memory properties.
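A minimal sketch of one of the three techniques the abstract names, lasso-based feature selection, on synthetic data; the feature matrix, target, and coefficients here are stand-ins, not the paper's Slurm data set. Lasso's L1 penalty drives irrelevant coefficients exactly to zero, so the surviving features are the selected ones.

```python
# Hedged sketch: lasso-based feature selection on synthetic data.
# The 8 candidate features and the target are made up for illustration,
# not taken from the paper's Slurm job data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                      # 8 candidate job features
w_true = np.array([3.0, 0, 0, 1.5, 0, 0, 0, 2.0])  # only 3 features matter
y = X @ w_true + rng.normal(scale=0.1, size=200)   # synthetic target

X_std = StandardScaler().fit_transform(X)          # standardize before L1 penalty
lasso = Lasso(alpha=0.1).fit(X_std, y)

# Features with nonzero coefficients are kept; lasso zeroes out the rest.
selected = np.flatnonzero(lasso.coef_ != 0)
print(selected)
```

Ridge regression, by contrast, shrinks coefficients toward zero without making them exactly zero, so it ranks features rather than pruning them outright.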

Data Science 2020 : Complete Data Science & Machine Learning


Data Science and Machine Learning are among the hottest skills in demand, but they are challenging to learn. Have you wished there were one course for Data Science and Machine Learning that covers everything from Math for Machine Learning, Advanced Statistics for Data Science, Data Processing, Machine Learning A-Z, Deep Learning, and more? Well, you have come to the right place. This Data Science and Machine Learning course has 11 projects, 250 lectures, more than 25 hours of content, one Kaggle competition project with a top-1-percentile score, code templates, and various quizzes. Today, Data Science and Machine Learning are used in almost all industries, including automobile, banking, healthcare, media, telecom, and others.

Active Feature Selection for the Mutual Information Criterion Machine Learning

We study active feature selection, a novel feature selection setting in which unlabeled data is available, but the budget for labels is limited, and the examples to label can be actively selected by the algorithm. We focus on feature selection using the classical mutual information criterion, which selects the $k$ features with the largest mutual information with the label. In the active feature selection setting, the goal is to use significantly fewer labels than the data set size and still find $k$ features whose mutual information with the label, computed over the \emph{entire} data set, is large. We explain and experimentally study the choices made in our algorithm, and show that they lead to a successful method compared to more naive approaches. Our design draws on insights that relate the problem of active feature selection to pure-exploration multi-armed bandit settings. While we focus here on mutual information, our general methodology can be adapted to other feature-quality measures as well. The code is available at the following url:
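The classical (non-active) criterion the abstract builds on can be sketched directly: score every feature's mutual information with the label on fully labeled data and keep the $k$ largest. The active variant in the paper aims to approximate this ranking from far fewer labels. Data here is synthetic.

```python
# Hedged sketch of the plain mutual information criterion: rank features by
# MI with the label and keep the top k. Synthetic discrete data for illustration.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 6)).astype(float)  # 6 discrete features
y = (X[:, 1] + X[:, 4] > 2).astype(int)              # label depends on features 1 and 4

k = 2
mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
top_k = np.argsort(mi)[-k:]          # indices of the k largest-MI features
print(sorted(top_k.tolist()))
```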

Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders Machine Learning

Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most existing feature selection methods are computationally inefficient, and inefficient algorithms lead to high energy consumption, which is undesirable for devices with limited computational and energy resources. In this paper, a novel and flexible method for unsupervised feature selection is proposed. This method, named QuickSelection, introduces the strength of the neuron in sparse neural networks as a criterion to measure feature importance. This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously. We implement QuickSelection in a purely sparse manner, as opposed to the typical approach of simulating sparsity with a binary mask over connections; this results in a considerable speed increase and memory reduction. When tested on several benchmark datasets, including five low-dimensional and three high-dimensional datasets, the proposed method achieves the best trade-off of classification and clustering accuracy, running time, and maximum memory usage among widely used feature selection approaches. Moreover, our proposed method requires the least energy among state-of-the-art autoencoder-based feature selection methods.
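The neuron-strength criterion, as I read the abstract, can be sketched as ranking input features by the total absolute weight on each input neuron's surviving connections in a sparse layer. The sparse mask and weights below are random stand-ins, not a trained sparse-evolutionary-training model.

```python
# Hedged sketch of the neuron-strength idea: importance of an input feature =
# sum of absolute weights on its outgoing connections in a sparse layer.
# Weights and sparsity mask are random placeholders, not a trained SET network.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 10, 32
weights = rng.normal(size=(n_in, n_hidden))
mask = rng.random((n_in, n_hidden)) < 0.2   # keep ~20% of connections (sparse)
sparse_w = weights * mask

# Neuron strength: sum of absolute outgoing weights per input feature.
strength = np.abs(sparse_w).sum(axis=1)
ranking = np.argsort(strength)[::-1]        # strongest input neurons first
print(ranking[:3])
```

Storing only the surviving connections (rather than a dense matrix times a binary mask) is what the abstract means by a "purely sparse" implementation; the dense-times-mask form above is just the easiest way to illustrate the criterion.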

Malware Detection using Artificial Bee Colony Algorithm Artificial Intelligence

Malware detection has become a challenging task due to the increase in the number of malware families. Universal malware detection algorithms that can detect all malware families are needed to make the whole process feasible. However, the more universal an algorithm is, the more feature dimensions it must work with, which inevitably causes the Curse of Dimensionality (CoD) problem. The real-time requirements of malware analysis make the problem even harder. In this paper, we address this problem and propose a feature selection-based malware detection algorithm using an evolutionary algorithm known as Artificial Bee Colony (ABC). The proposed algorithm enables researchers to decrease the feature dimension and, as a result, speed up the process of malware detection. The experimental results reveal that the proposed method outperforms the state of the art.
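A heavily simplified, hedged sketch of wrapper-style feature selection in the spirit of ABC: candidate feature subsets are bit vectors, a "bee" perturbs a subset by flipping one bit, and the change survives only if cross-validated accuracy improves (greedy selection). The data, classifier, and single-bee loop are toy stand-ins; a full ABC has populations of employed, onlooker, and scout bees.

```python
# Hedged, heavily simplified ABC-flavored wrapper: flip one feature bit per
# iteration, keep the flip if cross-validated accuracy improves.
# Synthetic data; only features 0 and 2 carry signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))
y = ((X[:, 0] + X[:, 2]) > 0).astype(int)   # label depends on features 0 and 2

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(LogisticRegression(), X[:, mask], y, cv=3).mean()

mask = rng.random(8) < 0.5                  # random initial feature subset
best = fitness(mask)
for _ in range(40):                         # one bee's local search
    trial = mask.copy()
    trial[rng.integers(8)] ^= True          # flip one feature in/out
    f = fitness(trial)
    if f > best:                            # greedy selection, as in ABC
        mask, best = trial, f
print(np.flatnonzero(mask), round(best, 2))
```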

Unfolding the Maths behind Ridge and Lasso Regression!


This article was published as a part of the Data Science Blogathon. Many times we have come across this statement: Lasso regression causes sparsity while Ridge regression doesn't! But I'm pretty sure most of us have not understood how exactly this works. Let's try to understand it using calculus. First, let's understand what sparsity is.
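The calculus behind that statement can be summarized for a single coefficient. Assuming an orthonormal design so each coefficient can be treated independently, with least-squares estimate $\hat\beta$ and penalty weight $\lambda$:

```latex
% Ridge: minimize (\beta - \hat\beta)^2 + \lambda \beta^2.
% Setting the derivative to zero, 2(\beta - \hat\beta) + 2\lambda\beta = 0, gives
\beta^{\mathrm{ridge}} = \frac{\hat\beta}{1 + \lambda}
% -- shrunk toward zero, but exactly zero only when \hat\beta = 0.

% Lasso: minimize (\beta - \hat\beta)^2 + \lambda |\beta|.
% The subgradient condition 2(\beta - \hat\beta) + \lambda\,\mathrm{sign}(\beta) \ni 0 gives
\beta^{\mathrm{lasso}} = \mathrm{sign}(\hat\beta)\,\max\!\left(|\hat\beta| - \tfrac{\lambda}{2},\; 0\right)
% -- soft thresholding: exactly zero whenever |\hat\beta| \le \lambda/2, hence sparsity.
```

The absolute-value penalty has a kink at zero, so a whole interval of $\hat\beta$ values maps to $\beta = 0$; the smooth quadratic penalty never does.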

Using the Chi-Squared test for feature selection with implementation


Let's approach this problem of feature selection using the Chi-Square test in a question-and-answer style. If you prefer video, you may check out our YouTube lecture on the same topic. Question 1: What is a feature? For any ML or DL problem, the data is arranged in rows and columns. Let's take the example of the Titanic shipwreck problem. Question 2: What are the different types of features?
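A minimal implementation sketch to go with the Q&A: scikit-learn's `chi2` score function scores each non-negative feature (counts, frequencies, one-hot columns) against a categorical label, and `SelectKBest` keeps the highest scorers. The data here is synthetic, not the Titanic set.

```python
# Hedged sketch: chi-squared feature selection with scikit-learn.
# chi2 requires non-negative features (e.g. counts or one-hot columns).
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)                    # binary class label
X = rng.integers(0, 5, size=(300, 4)).astype(float)
X[:, 2] += 3 * y                                    # feature 2 tracks the class

selector = SelectKBest(chi2, k=1).fit(X, y)
print(selector.get_support(indices=True))           # index of the kept feature
```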

Learning causal representations for robust domain adaptation Machine Learning

Domain adaptation solves the learning problem in a target domain by leveraging knowledge from a relevant source domain. While remarkable advances have been made, almost all existing domain adaptation methods rely heavily on large amounts of unlabeled target-domain data to learn domain-invariant representations that generalize well to the target domain. In fact, in many real-world applications, target-domain data may not always be available. In this paper, we study the case where target-domain data is unavailable at training time and only well-labeled source-domain data is available, which we call robust domain adaptation. To tackle this problem, under the assumption that causal relationships between features and the class variable are robust across domains, we propose a novel Causal AutoEncoder (CAE), which integrates a deep autoencoder and causal structure learning into a unified model to learn causal representations using data from a single source domain only. Specifically, a deep autoencoder model is adopted to learn low-dimensional representations, and a causal structure learning model is designed to separate the low-dimensional representations into two groups: causal representations and task-irrelevant representations. Extensive experiments on three real-world datasets have validated the effectiveness of CAE compared to eleven state-of-the-art methods.

Dependency-based Anomaly Detection: Framework, Methods and Benchmark Artificial Intelligence

Anomaly detection is an important research problem because anomalies often contain critical insights for understanding unusual behavior in data. One type of anomaly detection approach is dependency-based, which identifies anomalies by examining violations of the normal dependency among variables. These methods can discover subtle and meaningful anomalies with better interpretability. Existing dependency-based methods adopt different implementations and show different strengths and weaknesses. However, the theoretical fundamentals and the general process behind them have not been well studied. This paper proposes a general framework, DepAD, to provide a unified process for dependency-based anomaly detection. DepAD decomposes unsupervised anomaly detection tasks into feature selection and prediction problems. Utilizing off-the-shelf techniques, the DepAD framework can have various instantiations to suit different application domains. Comprehensive experiments have been conducted over one hundred instantiated DepAD methods on 32 real-world datasets to evaluate the performance of representative techniques in DepAD. To show the effectiveness of DepAD, we compare two DepAD methods with nine state-of-the-art anomaly detection methods, and the results show that DepAD methods outperform the comparison methods in most cases. Through the DepAD framework, this paper gives guidance and inspiration for future research on dependency-based anomaly detection and provides a benchmark for its evaluation.
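The two-step DepAD recipe described above can be sketched with off-the-shelf pieces: for each variable, select relevant predictor variables, fit a predictor, and score each object by how far its observed values deviate from the predicted ones. The particular choices below (k-best MI selection plus linear regression) are one plausible instantiation for illustration, not the paper's benchmarked configuration.

```python
# Hedged sketch of dependency-based anomaly detection: per-variable feature
# selection + prediction, with residuals accumulated into an anomaly score.
# Data is synthetic; one dependency-violating row is planted at index 0.
from functools import partial

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))
X[:, 3] = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=n)  # dependency
X[0, 3] += 8.0                       # plant one dependency-violating anomaly

scores = np.zeros(n)
for j in range(X.shape[1]):
    target = X[:, j]
    others = np.delete(X, j, axis=1)
    # Step 1: feature selection -- keep the 2 most informative predictors.
    sel = SelectKBest(partial(mutual_info_regression, random_state=0), k=2)
    Z = sel.fit_transform(others, target)
    # Step 2: prediction -- score objects by their residuals.
    pred = LinearRegression().fit(Z, target).predict(Z)
    scores += np.abs(target - pred)

print(int(np.argmax(scores)))        # the planted anomaly ranks first
```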

Differentiable Unsupervised Feature Selection based on a Gated Laplacian Machine Learning

Scientific observations may consist of a large number of variables (features). Identifying a subset of meaningful features is often ignored in unsupervised learning, despite its potential for unraveling clear patterns hidden in the ambient space. In this paper, we present a method for unsupervised feature selection, and we demonstrate its use for the task of clustering. We propose a differentiable loss function that combines the Laplacian score, which favors low-frequency features, with a gating mechanism for feature selection. We improve the Laplacian score by replacing it with a gated variant computed on a subset of features. This subset is obtained using a continuous approximation of Bernoulli variables whose parameters are trained to gate the full feature space. We mathematically motivate the proposed approach and demonstrate that in the high-noise regime it is crucial to compute the Laplacian on the gated inputs rather than on the full feature set. We provide experimental demonstrations of the efficacy of the proposed approach and its advantage over current baselines on several real-world examples.
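The plain (ungated) Laplacian score that the paper improves on can be sketched directly: build a k-NN affinity graph over the samples, then score each feature by how smoothly it varies over the graph; lower scores mean low-frequency, structure-preserving features. The gated variant in the paper instead computes this graph on a learned feature subset. Data below is synthetic.

```python
# Hedged sketch of the classic Laplacian score (He et al.'s criterion, which
# this paper gates): low score = feature varies smoothly over the k-NN graph.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
# Two clusters separated along feature 0; feature 1 is pure noise.
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal([4, 0], 0.3, (50, 2))])

W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
W = np.maximum(W, W.T)               # symmetrize the affinity matrix
D = np.diag(W.sum(axis=1))           # degree matrix
L = D - W                            # unnormalized graph Laplacian

ones = np.ones(len(X))
scores = []
for r in range(X.shape[1]):
    f = X[:, r]
    f_t = f - (f @ D @ ones) / (ones @ D @ ones)  # degree-weighted centering
    scores.append((f_t @ L @ f_t) / (f_t @ D @ f_t))
print(int(np.argmin(scores)))        # the cluster-separating feature wins
```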