Ensemble Learning


Ensemble Methods: A Beginner's Guide

#artificialintelligence

When I started my Data Science journey, terms like "ensemble" and "boosting" often popped up. Whenever I opened the discussion forum of a Kaggle competition or looked at a winner's solution, it was mostly filled with these things. At first these discussions sounded totally alien, and this class of ensemble models looked like fancy stuff not meant for newbies, but trust me, once you have a basic understanding of the concepts you are going to love them! So let's start with a very simple question: what exactly is an ensemble? "A group of separate things/people that contribute to a coordinated whole." In a way, this is the core idea behind the entire class of ensemble learning!

Let's rewind the clock a bit and go back to the school days for a while. Remember how you used to get a report card with an overall grade? How exactly was that overall grade calculated? The teachers of your respective subjects gave feedback based on their own sets of criteria: your math teacher would assess you on algebra, trigonometry and so on, your sports teacher would judge how you performed on the field, and your music teacher would judge your vocal skills. The point is, each of these teachers has their own rules for judging a student's performance, and later all of those judgments are combined into an overall grade for the student.
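To make the report-card analogy concrete, here is a minimal sketch of the "combine several independent judges" idea using scikit-learn's VotingClassifier. The synthetic dataset and the choice of base models are illustrative assumptions, not something from the original article.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# A synthetic dataset stands in for real data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three different "teachers", each judging by its own criteria.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=3)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # majority vote plays the role of the "overall grade"
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```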


Explainable 'AI' using Gradient Boosted randomized networks Pt2 (the Lasso)

#artificialintelligence

This post is about LSBoost, an explainable 'AI' algorithm that uses gradient-boosted randomized networks for pattern recognition. I've already presented some promising examples of the use of LSBoost with Ridge regression weak learners. In mlsauce version 0.7.1, the Lasso can also be used as an alternative ingredient in the weak learners. Here is a comparison of the regression coefficients obtained by using mlsauce's implementation of Ridge regression and the Lasso. The following example looks at training-set error versus testing-set error, as a function of the regularization parameter, for both Ridge regression and Lasso-based weak learners.
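For readers who want the flavor of the idea without mlsauce itself, here is a minimal, conceptual sketch of gradient boosting with Lasso weak learners on top of a randomized feature map, written with scikit-learn. This is not mlsauce's LSBoost implementation; the learning rate, number of rounds, and random projection are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

rng = np.random.default_rng(0)
# Randomized feature map: a fixed random projection plus a nonlinearity,
# loosely mimicking the "randomized network" ingredient.
W = rng.normal(size=(X.shape[1], 25))
H = np.tanh(X @ W)

n_rounds, lr = 50, 0.1
pred = np.zeros_like(y, dtype=float)
learners = []

for _ in range(n_rounds):
    residual = y - pred
    # Each weak learner is a Lasso fit to the current residuals.
    weak = Lasso(alpha=0.1).fit(H, residual)
    pred += lr * weak.predict(H)
    learners.append(weak)

print("final training MSE:", np.mean((y - pred) ** 2))
```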


Random Forest on GPUs: 2000x Faster than Apache Spark

#artificialintelligence

Disclaimer: I'm a Senior Data Scientist at Saturn Cloud -- we make enterprise data science fast and easy with Python, Dask, and RAPIDS. Check out a video walkthrough here. Random forest is a machine learning algorithm trusted by many data scientists for its robustness, accuracy, and scalability. The algorithm trains many decision trees through bootstrap aggregation, and predictions are made by aggregating the outputs of the trees in the forest. Due to its ensemble nature, random forest lends itself naturally to distributed computing settings.
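As a minimal sketch of the pattern the article benchmarks: cuML's RandomForestClassifier follows the scikit-learn estimator API, so the CPU version below maps closely onto the GPU version by swapping the import (as noted in the comment). The dataset and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# On a GPU with RAPIDS installed, the cuML equivalent would be:
#   from cuml.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is trained on a bootstrap sample of the data; the final
# prediction aggregates (majority-votes) the outputs of all trees.
rf = RandomForestClassifier(n_estimators=100, max_depth=8)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))
```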


[D] What's the best deep-dive explanation of XGBoost hyperparameters out there?

#artificialintelligence

I'm not a total newbie, so I'll pass on all those "how to get started with XGBoost" articles, of which there are plenty. I remember having bumped into a site or blog with a great and comprehensive summary of each hyperparameter, but I lost the link and can't find it now through search. As far as I remember, it had a hyperparameter menu on the left, probably covered all boosted-tree libraries and their hyperparameters, and was created by a woman. Can anybody recall that source?


What is Bootstrap Sampling in Machine Learning and Why is it Important?

#artificialintelligence

The bootstrap sampling method is a very simple concept and is a building block for some of the more advanced machine learning algorithms like AdaBoost and XGBoost. However, when I started my data science journey, I couldn't quite understand the point of it. So my goals are to explain what the bootstrap method is and why it's important to know! Technically speaking, bootstrap sampling is a resampling method that uses random sampling with replacement. Don't worry if that sounds confusing; let me explain it with a diagram. Suppose you have an initial sample with 3 observations.
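A minimal sketch of the same idea in code, assuming a toy 3-observation sample like the one in the article's diagram: each bootstrap sample is drawn from the original with replacement, so the same observation can appear more than once.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = np.array([1, 2, 3])  # the initial sample with 3 observations

# Draw a few bootstrap samples: same size as the original,
# sampled WITH replacement, so observations can repeat.
for i in range(3):
    boot = rng.choice(sample, size=sample.size, replace=True)
    print(f"bootstrap sample {i + 1}:", boot)
```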


Classification with Random Forests in Python

#artificialintelligence

The random forests algorithm is a machine learning method that can be used for supervised learning tasks such as classification and regression. The algorithm works by constructing a set of decision trees trained on random subsets of features. In the case of classification, the output of a random forest model is the mode of the predicted classes across the decision trees. In this post, we will discuss how to build random forest models for classification tasks in Python.
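Here is a minimal sketch of the kind of classifier such a post typically builds; the iris dataset and the hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree sees a bootstrap sample and random subsets of features;
# the predicted class is the mode (majority vote) across the trees.
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```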


A complete explanation of the Random Forest algorithm

#artificialintelligence

Ensemble learning is a technique in which different algorithms, or multiple instances of the same algorithm, are combined to form a more powerful regression or classification model. The random forest algorithm combines multiple decision trees into a single model. Because of its diversity and simplicity, it is one of the most widely used algorithms, and it works for both classification and regression problems.
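Since the previous snippet covered the classification side, here is the regression side for contrast: in regression, the forest averages the trees' predictions instead of taking a majority vote. The synthetic dataset and settings are illustrative assumptions.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1_000, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# For regression, the forest's prediction is the mean of the
# individual trees' predictions rather than a majority vote.
reg = RandomForestRegressor(n_estimators=300, random_state=0)
reg.fit(X_train, y_train)
print("test R^2:", reg.score(X_test, y_test))
```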


How to Develop an AdaBoost Ensemble in Python

#artificialintelligence

Boosting is a class of ensemble machine learning algorithms that involve combining the predictions from many weak learners. A weak learner is a model that is very simple, although it has some skill on the dataset. Boosting was a theoretical concept long before a practical algorithm could be developed, and the AdaBoost (adaptive boosting) algorithm was the first successful approach for the idea. The AdaBoost algorithm uses very short (one-level) decision trees as weak learners that are added sequentially to the ensemble. Each subsequent model attempts to correct the predictions made by the model before it in the sequence.
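A minimal sketch of exactly this setup, using scikit-learn's AdaBoostClassifier with one-level decision trees (decision stumps) as the weak learners; the dataset and the number of boosting rounds are illustrative assumptions, and the `estimator` parameter name assumes scikit-learn 1.2 or later.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1_000, n_features=20, random_state=7)

# Weak learner: a one-level decision tree (a "decision stump").
stump = DecisionTreeClassifier(max_depth=1)

# Stumps are added sequentially; each new stump focuses on the
# examples the previous ones got wrong.
ada = AdaBoostClassifier(estimator=stump, n_estimators=100, random_state=7)
print("CV accuracy:", cross_val_score(ada, X, y, cv=5).mean())
```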


Boosting and Bagging: How To Develop A Robust Machine Learning Algorithm

#artificialintelligence

Machine learning and data science require more than just throwing data into a Python library and using whatever comes out. Data scientists need to actually understand the data and the processes behind it to be able to implement a successful system. One key part of implementation is knowing when a model might benefit from bootstrapping methods. Models that do are what are called ensemble models; some examples are AdaBoost and stochastic gradient boosting.
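A minimal sketch contrasting the two families named in the title, both available in scikit-learn; setting `subsample < 1.0` is what makes the gradient boosting "stochastic". The dataset and settings are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=1)

# Bagging: independent models fit on bootstrap samples, combined by voting.
bagging = BaggingClassifier(n_estimators=100, random_state=1)

# Stochastic gradient boosting: sequential models, each fit to the
# previous ensemble's errors on a random subsample of the data.
boosting = GradientBoostingClassifier(n_estimators=100, subsample=0.8, random_state=1)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    print(name, "CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```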


Machine Learning Cheat Sheet (for scikit-learn)

#artificialintelligence

As you have hopefully heard, we at scikit-learn are running a user survey (which is still open, by the way). One of the requests there was to provide some sort of flow chart on how to do machine learning. As this is clearly impossible, I went to work straight away. This is the result: [edit2] Clarification: by ensemble classifiers and ensemble regressors I mean random forests, extremely randomized trees, gradient boosted trees, and the soon-to-come weight-boosted trees (AdaBoost). More seriously: this is actually my workflow / train of thought whenever I try to solve a new problem.