
Valiance Improving Predictions with Ensemble Model


"Alone we can do so little; together we can do so much" – this phrase from Helen Keller is reflected in decades of real-life achievements and success stories. The same applies to most high-impact innovations in the world of advanced technology. The machine learning domain is in the same race to make predictions and classifications more accurate using so-called ensemble methods, and ensemble modeling has proven to be one of the most convincing ways to build highly accurate predictive models. Ensemble methods are learning models that achieve performance by combining the opinions of multiple learners. Typically, an ensemble model is a supervised learning technique that combines multiple weak learners or models to produce a strong learner, using the concepts of bagging and boosting for data sampling.
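The bagging idea mentioned above can be sketched in a few lines of plain Python: draw bootstrap samples (sampling with replacement) from the training data, fit one weak learner per sample, and combine their outputs by majority vote. The helper names below are illustrative, not from any particular library.

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Bagging's sampling step: draw len(data) points with replacement."""
    return [rng.choice(data) for _ in range(len(data))]

def combine_by_vote(predictions):
    """Combine the weak learners' outputs by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
data = [("x1", "spam"), ("x2", "ham"), ("x3", "spam"), ("x4", "ham")]
sample = bootstrap_sample(data, rng)
print(len(sample) == len(data))                  # True: same size, drawn with replacement
print(combine_by_vote(["spam", "ham", "spam"]))  # spam
```

In boosting, by contrast, the samples are not drawn independently: each learner's sample is weighted toward the points the previous learners got wrong.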

07 -- Hands On ML -- Ensemble


Ensemble learning takes the predictions of multiple models and chooses as output the class that receives the most votes. When you train multiple decision trees, each on a random sample of the dataset, and combine the predictions of all the trees, the output class is the one that gets the most votes; this approach is called a Random Forest. A voting classifier is when you train multiple classifiers, such as Logistic Regression, SVM, Random Forest and others, and take the majority vote as the predicted output class, i.e., hard voting. Voting can also be soft, taking the argmax of the averaged class probabilities.
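Hard versus soft voting can be illustrated with a toy sketch (plain Python, hypothetical helper names): hard voting takes the mode of the predicted labels, while soft voting averages each model's class probabilities and takes the argmax, so the two can disagree when a confident minority outweighs a hesitant majority.

```python
from collections import Counter

def hard_vote(labels):
    """Hard voting: the class predicted by the most models wins."""
    return Counter(labels).most_common(1)[0][0]

def soft_vote(prob_rows, classes):
    """Soft voting: average class probabilities across models, then argmax."""
    n = len(prob_rows)
    avg = [sum(row[i] for row in prob_rows) / n for i in range(len(classes))]
    return classes[avg.index(max(avg))]

# three models' class-probability estimates for classes [0, 1]
probs = [[0.4, 0.6], [0.45, 0.55], [0.9, 0.1]]
labels = [row.index(max(row)) for row in probs]  # per-model hard predictions: [1, 1, 0]
print(hard_vote(labels))                         # 1 (two of three models say class 1)
print(soft_vote(probs, [0, 1]))                  # 0 (averaged probabilities favour class 0)
```

The disagreement here is the point: the third model is very confident in class 0, which soft voting takes into account and hard voting ignores.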

Ensemble Learning: Stacking, Blending & Voting


We have all heard the phrase "unity is strength", whose meaning carries over to many areas of life. Sometimes the correct answer to a problem is supported by several sources rather than just one. This is what Ensemble Learning tries to do: put together a group of ML models to improve solutions to specific problems. Throughout this blog, we will learn what Ensemble Learning is, what types of ensembles exist, and we will specifically address Voting and Stacking ensembles. Ensemble Learning refers to the joint use of ML algorithms to solve mainly classification and/or regression problems. These algorithms can be of the same type (homogeneous Ensemble Learning) or of different types (heterogeneous Ensemble Learning).
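The stacking idea can be sketched in plain Python: the level-0 base models each produce a prediction, and those predictions become the feature vector for a level-1 meta-learner. The base models and the averaging meta-learner below are toy stand-ins, not a real training procedure (in practice the meta-learner is trained on held-out base-model predictions).

```python
def stack_features(base_models, x):
    """Level-0: each base model's prediction becomes one meta-feature."""
    return [model(x) for model in base_models]

def meta_learner(features):
    """Level-1: a toy meta-model; a real one would be fitted, not fixed."""
    return sum(features) / len(features)

# three toy base regressors (hypothetical stand-ins for trained models)
base_models = [lambda x: 2 * x, lambda x: x + 1, lambda x: 3.0]
features = stack_features(base_models, 2.0)  # [4.0, 3.0, 3.0]
print(meta_learner(features))                # 10/3, roughly 3.33
```

Blending follows the same two-level structure, differing mainly in how the held-out data for the meta-learner is split off.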

Applying an Ensemble Learning Method for Improving Multi-label Classification Performance Machine Learning

In recent years, the multi-label classification problem has become a challenging issue. In this kind of classification, each sample is associated with a set of class labels. Ensemble approaches are supervised learning algorithms in which an operator takes a number of learning algorithms, called base-level algorithms, and combines their outcomes to make an estimation. The simplest form of ensemble learning is to train the base-level algorithms on random subsets of the data and then let them vote for the most popular classification or average their predictions. In this study, an ensemble learning method is proposed for improving multi-label classification evaluation criteria. We have compared our method with well-known base-level algorithms on several data sets. Experimental results show the proposed approach outperforms the well-known base classifiers on the multi-label classification problem.
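A minimal sketch of how base-level votes can be combined in the multi-label setting (plain Python, illustrative names only): each base learner predicts a set of labels for a sample, and a label is kept when enough learners agree on it. This is a generic per-label vote, not the specific method proposed in the paper above.

```python
from collections import Counter

def multilabel_vote(label_sets, threshold=0.5):
    """Keep a label when at least `threshold` of the base learners predict it."""
    counts = Counter(label for labels in label_sets for label in labels)
    n = len(label_sets)
    return {label for label, c in counts.items() if c / n >= threshold}

# three base learners' predicted label sets for one sample
preds = [{"sports", "news"}, {"sports"}, {"news", "tech"}]
print(sorted(multilabel_vote(preds)))  # ['news', 'sports'] (tech only got 1/3 of the votes)
```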

A Comprehensive Guide to Ensemble Learning - What Exactly Do You Need to Know


Ensemble learning techniques have been proven to yield better performance on machine learning problems. We can use these techniques for regression as well as classification. The final prediction from these ensembling techniques is obtained by combining the results of several base models. Averaging, voting and stacking are some of the ways the results are combined to obtain a final prediction. In this article, we will explore how ensemble learning can be used to come up with optimal machine learning models. Ensemble learning is the combination of several machine learning models for one problem.
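As a quick illustration of the averaging combiner mentioned above (a toy sketch in plain Python; the weights are assumed, not learned): for regression, the final prediction can be a weighted average of the base models' outputs.

```python
def weighted_average(predictions, weights):
    """Combine base-model outputs with fixed weights (assumed to sum to 1)."""
    return sum(p * w for p, w in zip(predictions, weights))

base_outputs = [3.0, 4.0, 5.0]  # predictions from three base regressors
weights = [0.5, 0.3, 0.2]       # e.g. chosen by validation performance
print(weighted_average(base_outputs, weights))  # roughly 3.7
```

With equal weights this reduces to plain averaging; unequal weights let the ensemble trust its stronger base models more.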