Ensemble Learning


What's so special about CatBoost?

#artificialintelligence

CatBoost is a gradient boosting library developed by Yandex that outperforms many existing boosting implementations such as XGBoost and LightGBM. While deep learning algorithms require lots of data and computational power, boosting algorithms remain the workhorse for most business problems. However, boosting algorithms like XGBoost can take hours to train, and tuning their hyper-parameters is often frustrating. CatBoost, on the other hand, is easy to implement and very powerful.
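
For a sense of how little setup CatBoost needs, here is a minimal sketch of training a classifier on a table with raw categorical columns; the file name, column names, and parameter values are illustrative assumptions, not taken from the article.

```python
# Minimal CatBoost sketch: train a classifier without manually encoding categorical columns.
# The CSV file, column names, and parameter values below are illustrative placeholders.
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("customers.csv")             # hypothetical dataset
X = df.drop(columns=["churned"])
y = df["churned"]
cat_features = ["country", "plan"]            # categorical columns passed as-is, no one-hot encoding

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = CatBoostClassifier(iterations=500, learning_rate=0.05, depth=6, verbose=100)
model.fit(X_train, y_train, cat_features=cat_features, eval_set=(X_test, y_test))
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```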


r/MachineLearning - [D] Tips on improving random forest predictive accuracy when # of features is really low?

#artificialintelligence

Normally when I do RF projects, I use some sort of feature selection method to choose which features to use, then fit the RF model on those features. To test accuracy and related metrics I use cross-validation, confusion matrices, etc. In this case, however, I only have two given features. I don't want to just run an RF model on those two features as my entire project. Is gradient boosting what I should learn?
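
For context, cross-validating a Random Forest (and a gradient-boosted alternative) on just two features takes only a few lines; here is a minimal sketch using synthetic data in place of the poster's real dataset.

```python
# Sketch: cross-validating Random Forest vs. gradient boosting on only two features.
# The synthetic data below stands in for the poster's real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

for name, model in [("random forest", RandomForestClassifier(n_estimators=300, random_state=0)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```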


Diabetes Prediction with Ensemble Techniques

#artificialintelligence

For this data we will build models to predict the "Outcome" variable, i.e. diabetes yes or no, and apply ensemble techniques to improve our predictions. We will use several techniques; each is briefly described as we build the code below. Here is the link to the data, from where it can be downloaded.
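
As one example of the kind of ensemble we might build, here is a minimal soft-voting sketch; the file name and the "Outcome" column follow the common Pima Indians diabetes CSV layout and are assumptions, not lifted from the article's code.

```python
# Sketch of one ensemble approach (soft voting) for a binary diabetes "Outcome" target.
# The file name and column layout are assumed; adjust them to the article's data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes.csv")                       # assumed file name
X, y = df.drop(columns=["Outcome"]), df["Outcome"]     # Outcome: 1 = diabetes, 0 = no diabetes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=1, stratify=y)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=1)),
                ("gb", GradientBoostingClassifier(random_state=1))],
    voting="soft")                                     # average the predicted probabilities
ensemble.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```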


How to Choose among Three Forecasting Models: Machine Learning, Statistical and Expert - Bain & Company

#artificialintelligence

Forecasting methods usually fall into three categories: statistical models, machine learning models and expert forecasts, with the first two being automated and the third being manual. Statistical methods, including time series models and regression analysis, are considered traditional, while machine learning methods, such as neural networks, random forests and gradient boosting, are more modern. Yet when selecting a forecasting method, the "modern vs. traditional" or "automated vs. manual" comparisons can mislead. Preferences often depend on the modeler's training: those with data science training tend to prefer machine learning models, while modelers with business backgrounds place more trust in expert forecasts. In fact, each of the three methods has different strengths and can play an important role in forecasting.


Explaining Predictions: Random Forest Post-hoc Analysis (randomForestExplainer package)

#artificialintelligence

We can further evaluate the variable interactions by plotting the predicted probability against the variables that make up the interaction. However, there is an error when the model supplied is one created with parsnip; there is no error when the model is created directly with the randomForest package. In this case, we can place the plot side by side with the ggplot of the distribution of heart disease in the test set.


How We Teach Machines to Learn: Tree Boosting and Adversarial Networks - PROPRIUS

#artificialintelligence

No matter how you prefer to learn, you have probably heard someone mention your particular "learning style." This seemingly simple phrase implies that there are many different ways to learn, and that people can be better or worse at certain learning methods depending on personal preferences and circumstances. As it is with people, so it is with computers. Machine learning is advancing quickly thanks to researchers and programmers figuring out how to optimize the ways machines learn new information. Two methods in particular, tree boosting and adversarial networks, work in tandem to produce encouraging results.


Using Machine Learning to Battle Antibiotic Resistance

#artificialintelligence

The team applied a machine learning algorithm called extreme gradient boosting (XGBoost). Using those 10-mer counts, the computer builds decision trees to predict the right MICs (minimum inhibitory concentrations). Each decision point uses one of the 10-mers to help classify a given genome as resistant or susceptible to various drugs. The algorithm then assigns different levels of importance to each 10-mer and builds trees repeatedly, in rounds called "boosts," until it gets the lowest error it can for its MIC predictions compared to the true MICs. The researchers ran the algorithm 10 times, each time leaving out a different tenth of their dataset.
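
The described setup (10-mer count features, gradient-boosted trees, ten-fold evaluation) can be sketched roughly as below; the count matrix and labels are random placeholders rather than the study's genomes, and the parameters are assumptions.

```python
# Sketch of the described setup: XGBoost trained on 10-mer counts, evaluated with 10-fold CV.
# The count matrix and labels below are random placeholder data, not the study's genomes.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(300, 5000))   # rows = genomes, columns = counts of 5,000 10-mers
y = rng.integers(0, 2, size=300)            # 1 = resistant, 0 = susceptible (placeholder labels)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
scores = cross_val_score(model, X, y, cv=10)  # each fold leaves out a different tenth of the data
print("Mean 10-fold accuracy:", scores.mean())
```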


VariantSpark, A Random Forest Machine Learning Implementation for Ultra High Dimensional Data

#artificialintelligence

The demands on machine learning methods to cater for ultra high dimensional datasets, datasets with millions of features, have been increasing in domains like life sciences and the Internet of Things (IoT). While Random Forests are suitable for "wide" datasets, current implementations such as Google's PLANET lack the ability to scale to such dimensions. Recent improvements by Yggdrasil begin to address these limitations but do not extend to Random Forest. This paper introduces CursedForest, a novel Random Forest implementation on top of Apache Spark and part of the VariantSpark platform, which parallelises processing of all nodes over the entire forest. CursedForest is 9 times faster than Google's PLANET and up to 89 times faster than Yggdrasil, and is the first method capable of scaling to millions of features.
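
For orientation, a standard distributed Random Forest on Spark MLlib looks like the following; this is not the CursedForest implementation the paper introduces, and the file path and column names are placeholders.

```python
# Sketch: an ordinary Spark MLlib Random Forest, shown only to contrast with the paper's
# CursedForest implementation. The parquet path and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("rf-wide-data").getOrCreate()
data = spark.read.parquet("variants.parquet")   # expects a "features" vector column and a "label" column

rf = RandomForestClassifier(featuresCol="features", labelCol="label", numTrees=100)
model = rf.fit(data)
predictions = model.transform(data)
predictions.select("label", "prediction").show(5)
```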


Comparing Decision Tree Algorithms: Random Forest vs. XGBoost

#artificialintelligence

This tutorial walks you through a comparison of XGBoost and Random Forest, two popular decision-tree-based ensemble algorithms, and helps you identify the best use cases for ensemble techniques like bagging and boosting. Understanding the benefits of bagging and boosting, and knowing when to use each technique, will lead to less variance, lower bias, and more stability in your machine learning models.
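
A minimal sketch of such a comparison on a built-in scikit-learn dataset (not the tutorial's own code) might look like this:

```python
# Sketch: comparing bagging (Random Forest) and boosting (XGBoost) on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Random Forest (bagging)": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGBoost (boosting)": XGBClassifier(n_estimators=300, learning_rate=0.1, eval_metric="logloss"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```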


Tuning XGBoost Hyperparameters with Scikit Optimize

#artificialintelligence

Before we get into code and muddy our hands, let us pause for a minute and ask ourselves: if we were a computer and had been given the same problem, how would we solve it? I'll assume we have only two hyperparameters in this situation because it makes things easier to visualize. The first thing I would do is build a model with random values for our two hyperparameters and see how it performs. Next, I would increase one parameter while keeping the other fixed, just to see how model performance responds to that change. If performance improves, I am moving that hyperparameter in the right direction.
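
Scikit Optimize automates exactly this kind of search. Here is a minimal sketch with two assumed hyperparameter ranges and a built-in dataset, not the article's exact setup:

```python
# Sketch: letting scikit-optimize search two XGBoost hyperparameters (assumed ranges).
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from xgboost import XGBClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

search = BayesSearchCV(
    estimator=XGBClassifier(eval_metric="logloss"),
    search_spaces={"max_depth": Integer(2, 8),                        # the two tuned hyperparameters
                   "learning_rate": Real(0.01, 0.3, prior="log-uniform")},
    n_iter=25, cv=5, random_state=0)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Best CV score:", search.best_score_)
```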