Ensemble Learning: Overviews


A Primer to Ensemble Learning – Bagging and Boosting

#artificialintelligence

Ensemble learning is a machine learning concept in which multiple models are trained using the same learning algorithm and their predictions are combined. Bagging decreases the variance of the prediction by generating additional training sets from the original dataset through sampling with replacement (bootstrapping), producing multiple sets of the original data. Boosting is an iterative technique that adjusts the weight of each observation based on the most recent classification: if an observation was classified incorrectly, its weight is increased so that subsequent models focus on it. Boosting generally builds strong predictive models.
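
As a hedged illustration of the two techniques, here is a minimal sketch using scikit-learn (a library choice of ours; the primer itself is tool-agnostic). The synthetic dataset, estimator counts, and other hyperparameters are assumptions for demonstration only:

```python
# Minimal sketch of bagging vs. boosting with scikit-learn; the synthetic
# dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: each base tree is trained on a bootstrap sample (sampling with
# replacement) of the training data; averaging the trees reduces variance.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: base learners are fitted sequentially, and observations
# misclassified by earlier learners receive higher weights in later rounds.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```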


An Evaluation of Classification and Outlier Detection Algorithms

arXiv.org Machine Learning

This paper evaluates the classification and outlier detection accuracy of algorithms on temporal data. We focus on algorithms that train and classify rapidly and can be used in systems that need to incorporate new data regularly. Hence, we compare the accuracy of six fast algorithms across a range of well-known time-series datasets. The analyses demonstrate that the choice of algorithm is task- and data-specific, but that we can derive heuristics for choosing among them. Gradient Boosting Machines are generally best for classification; there is no single winner for outlier detection, though Gradient Boosting Machines (again) and Random Forest perform better than the alternatives. Hence, we recommend running evaluations of a number of algorithms using our heuristics.
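
In the spirit of the paper's recommendation to evaluate several algorithms on your own task, a sketch of such a comparison in scikit-learn follows; the synthetic data and the particular candidate list are our assumptions, not the paper's actual benchmark:

```python
# Hedged sketch of comparing several fast classifiers by cross-validation;
# the synthetic data and model list are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, random_state=1)

candidates = {
    "gradient_boosting": GradientBoostingClassifier(random_state=1),
    "random_forest": RandomForestClassifier(random_state=1),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

# Rank candidates by mean cross-validated accuracy; as the paper stresses,
# the best choice is task- and data-specific.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```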


KNIME Analytics: a Review

#artificialintelligence

This video gives a general review of the analytics capabilities of the KNIME Analytics Platform. It covers Random Forest, Deep Learning, Gradient Boosted Trees, Bagging and Boosting for ensemble methods, Decision Trees, Neural Networks, Logistic Regression, how to build your own ensemble model, and external integrations such as Weka, H2O, R, and Python. For reasons of time, this overview is of course incomplete. Download and install the KNIME Analytics Platform (https://www.knime.com/downloads) to explore the constantly growing set of machine learning and statistics algorithms available to analyze your data.


Extreme Gradient Boosting and Behavioral Biometrics

AAAI Conferences

As insider hacks become more prevalent, it becomes more useful to identify valid users from inside a system rather than at the usual external entry points, where exploits are used to gain entry. One of the main goals of this study was to ascertain how well Gradient Boosting could be used for prediction or, in this case, the classification or identification of a specific user through the learning of HCI-based behavioral biometrics. If applicable, this procedure could be used to verify users after they have gained entry into a protected system, using data that is as human-centric as other biometrics but less invasive. For this study, an Extreme Gradient Boosting algorithm was used for training and testing on a dataset containing keystroke dynamics information. This algorithm was chosen because the majority of current research utilizes mainstream methods such as KNN and SVM, and the hypothesis of this study centered on the potential applicability of ensemble-related decision or model trees. The final predictive model produced an accuracy of 0.941 with a Kappa value of 0.942, demonstrating that HCI-based behavioral biometrics in the form of keystroke dynamics can be used to identify the users of a system.
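
To make the approach concrete, here is a hedged sketch of training an Extreme Gradient Boosting classifier for user identification with the xgboost library. The random placeholder features, dataset shape, and hyperparameters are our assumptions (so the printed scores are meaningless until real keystroke-dynamics features, such as key hold and inter-key latency times, are substituted); the paper's actual preprocessing and tuning are not described in this summary:

```python
# Hedged sketch: multi-class user identification from keystroke-dynamics
# features with XGBoost. Placeholder random data; all shapes and
# hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_users, samples_per_user, n_features = 10, 100, 31
X = rng.normal(size=(n_users * samples_per_user, n_features))
y = np.repeat(np.arange(n_users), samples_per_user)  # user identity labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Multi-class classification: predict which user produced each typing sample.
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("kappa:", cohen_kappa_score(y_test, pred))
```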