Ensemble Learning


BetaBoosting

#artificialintelligence

At this point, we all know of XGBoost due to the massive success it has had in numerous Data Science competitions held on platforms like Kaggle. Along with its success, we have seen several variations such as CatBoost and LightGBM. All of these implementations are based on the Gradient Boosting algorithm developed by Friedman¹, which iteratively builds an ensemble of weak learners (usually decision trees), where each subsequent learner is trained on the previous learner's errors. Let's take a look at the general pseudo-code for the algorithm from Elements of Statistical Learning². However, this is not complete! A core mechanism that allows boosting to work is a shrinkage parameter, commonly called the 'learning rate', that penalizes each learner at each boosting round.
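
To make the role of that shrinkage parameter concrete, here is a minimal sketch of gradient boosting for squared-error loss, using scikit-learn decision trees as the weak learners; the hyperparameter values are illustrative, not the defaults of any particular package.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    """Minimal gradient boosting for squared-error loss.

    Each round fits a shallow tree to the current residuals (the negative
    gradient of the squared-error loss) and adds it to the ensemble, scaled
    by the shrinkage parameter `learning_rate`.
    """
    # Initialize with the best constant prediction (the mean for squared error).
    f0 = np.mean(y)
    prediction = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        residuals = y - prediction               # negative gradient of 1/2 (y - f)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        prediction += learning_rate * tree.predict(X)  # shrink each learner's contribution
        trees.append(tree)
    return f0, trees

def boosted_predict(X, f0, trees, learning_rate=0.1):
    """Sum the shrunken contributions of all fitted trees."""
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```

Setting the learning rate lower typically requires more boosting rounds but tends to generalize better, which is exactly the trade-off the shrinkage parameter controls.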


How and why to build your own gradient boosted-tree package

#artificialintelligence

In order to make accurate and fast travel-time predictions, Lyft built a gradient boosted tree (GBT) package from the ground up. It is slower to train than off-the-shelf packages, but can be customized to treat space and time more efficiently and yield less volatile predictions. Machine learning runs at the core of what we do at Lyft. Examples include predicting travel time between two locations, modeling the probability of a ride being canceled, forecasting supply and demand, and many more. These models enable us to match riders and drivers more efficiently, incentivize drivers to be where they can get more rides, and improve the ride experience.


Machine Learning in Python with 5 Machine Learning Projects

#artificialintelligence

This course is a perfect fit for you. It will take you step by step into the world of Machine Learning. Machine Learning is the study of computer algorithms that automate analytical model building. It is a branch of Artificial Intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Machine Learning is actively being used today, perhaps in many more places than one would expect.


What's in a "Random Forest"? Predicting Diabetes

#artificialintelligence

If you've heard of "random forests" as a hot, sexy machine learning algorithm and you want to implement it, great! But if you're not sure exactly what happens in a random forest, or how random forests make their classification decisions, then read on :) We'll find that we can break random forests down into smaller, more digestible pieces. As a forest is made of trees, so a random forest is made of a bunch of randomly sampled sub-components called decision trees. So first let's try to understand what a decision tree is, and how it comes to its prediction. For now, we'll just look at classification decision trees.
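
As a rough illustration of the tree-versus-forest idea, here is a minimal scikit-learn sketch comparing a single classification decision tree with a random forest; the synthetic data merely stands in for a diabetes-style table, and the hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a tabular medical dataset (e.g. diabetes features).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single classification decision tree...
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# ...and a forest of such trees, each grown on a bootstrap sample of the rows,
# with a random subset of features considered at each split. The forest's
# prediction is the majority vote of its trees.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```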


XGBoost -- The Undisputed GOAT!

#artificialintelligence

In this article, we'll learn about XGBoost, its background, and its widespread use in competitions such as Kaggle's, and we'll build an intuitive understanding of it by diving into the foundations of the algorithm. XGBoost is a highly flexible, portable, and efficient library for decision-tree-based ensemble learning that uses a distributed gradient boosting framework. Machine learning algorithms are implemented in XGBoost under this gradient boosting framework. XGBoost can solve data science problems accurately and quickly with its parallel tree boosting, also known as Gradient Boosting Machine (GBM) or Gradient Boosted Decision Trees (GBDT). It is extremely portable and cross-platform: the same code runs on the major distributed environments such as Hadoop, MPI, and SGE, and it can scale to problems with billions of examples.
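
For readers who just want to see the library in action, here is a minimal sketch of XGBoost's scikit-learn-style interface, assuming the xgboost package is installed; the hyperparameter values are illustrative rather than recommended settings.

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data as a stand-in for a real tabular problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted decision trees with shrinkage; values here are illustrative.
model = xgb.XGBClassifier(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.1,
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```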


AdaBoost

#artificialintelligence

Boosting refers to any ensemble method that can combine several weak learners into a strong learner. The general idea of most boosting methods is to train predictors sequentially, each trying to correct its predecessor. There are many boosting methods available; one of the most popular is AdaBoost (Adaptive Boosting). The way for a new predictor to correct its predecessor is to pay a bit more attention to the training instances that the predecessor underfitted. This is the technique used by AdaBoost.
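
Here is a minimal sketch of AdaBoost using scikit-learn, whose AdaBoostClassifier boosts decision stumps (depth-1 trees) by default; the dataset and parameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# By default AdaBoostClassifier boosts decision stumps (depth-1 trees).
# After each round, misclassified training instances are re-weighted upward,
# so the next weak learner pays more attention to the examples its
# predecessors got wrong.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```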


Pushing on Text Readability Assessment: A Transformer Meets Handcrafted Linguistic Features

arXiv.org Artificial Intelligence

We report two essential improvements in readability assessment: (1) three novel features in advanced semantics and (2) timely evidence that traditional ML models (e.g. Random Forest, using handcrafted features) can be combined with transformers (e.g. RoBERTa) to augment model performance. First, we explore suitable transformers and traditional ML models. Then, we extract 255 handcrafted linguistic features using self-developed extraction software. Finally, we assemble these into several hybrid models, achieving state-of-the-art (SOTA) accuracy on popular datasets in readability assessment. The use of handcrafted features helps model performance on smaller datasets. Notably, our RoBERTA-RF-T1 hybrid achieves near-perfect classification accuracy of 99%, a 20.3% increase over the previous SOTA.
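
The abstract does not spell out the hybrid pipeline, but one plausible reading is to concatenate precomputed transformer embeddings with the handcrafted linguistic features and feed both to a Random Forest; the sketch below uses random arrays as hypothetical stand-ins for the real embeddings, features, and readability labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical placeholders: sentence embeddings precomputed with a transformer
# such as RoBERTa (one 768-dim vector per text) and handcrafted linguistic
# features (e.g. the 255 features mentioned above), plus readability labels.
rng = np.random.default_rng(0)
roberta_embeddings = rng.normal(size=(500, 768))    # stand-in for real embeddings
handcrafted_features = rng.normal(size=(500, 255))  # stand-in for real features
labels = rng.integers(0, 3, size=500)               # stand-in readability levels

# One simple hybrid: concatenate both feature blocks and train a Random Forest.
X = np.hstack([roberta_embeddings, handcrafted_features])
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(rf, X, labels, cv=5).mean())
```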


Minimax Rates for STIT and Poisson Hyperplane Random Forests

arXiv.org Machine Learning

In [12], Mourtada, Gaïffas and Scornet showed that, under proper tuning of the complexity parameters, random trees and forests built from the Mondrian process in $\mathbb{R}^d$ achieve the minimax rate for $\beta$-Hölder continuous functions, and random forests achieve the minimax rate for $(1+\beta)$-Hölder functions in arbitrary dimension. In this work, we show that a much larger class of random forests built from random partitions of $\mathbb{R}^d$ also achieves these minimax rates. This class includes STIT random forests, the most general class of random forests that can be built from a self-similar and stationary partition of $\mathbb{R}^d$ by hyperplane cuts, as well as forests derived from Poisson hyperplane tessellations. Our proof technique relies on classical results as well as recent advances on stationary random tessellations in stochastic geometry.
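
For context, the minimax benchmark referred to here is the classical nonparametric-regression rate: for a smoothness exponent $s$ (here $s = \beta$ for the trees and $s = 1 + \beta$ for the forests), the optimal squared-error risk scales as follows.

```latex
% Classical minimax rate for estimating an s-Hölder regression function
% in dimension d under squared-error risk (with s = \beta or s = 1 + \beta above):
\inf_{\hat f}\; \sup_{f \in \mathcal{H}(s, L)}
  \mathbb{E}\,\lVert \hat f - f \rVert_2^2 \;\asymp\; n^{-\frac{2s}{2s + d}}
```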


Beyond Discriminant Patterns: On the Robustness of Decision Rule Ensembles

arXiv.org Artificial Intelligence

Local decision rules are commonly understood to be more explainable, due to the local nature of the patterns involved. With numerical optimization methods such as gradient boosting, ensembles of local decision rules can achieve good predictive performance on data involving global structure. Meanwhile, machine learning models are increasingly used to solve problems in high-stakes domains including healthcare and finance. Here, there is an emerging consensus that practitioners need to understand whether and how those models will perform robustly in the deployment environments, in the presence of distributional shifts. Past research on local decision rules has focused mainly on maximizing discriminant patterns, without due consideration of robustness against distributional shifts. To fill this gap, we propose a new method to learn and ensemble local decision rules that are robust in both the training and deployment environments. Specifically, we propose to leverage causal knowledge by regarding the distributional shifts in subpopulations and deployment environments as the results of interventions on the underlying system. We propose two regularization terms based on causal knowledge to search for optimal and stable rules. Experiments on both synthetic and benchmark datasets show that our method is effective and robust against distributional shifts in multiple environments.


Context-aware Retail Product Recommendation with Regularized Gradient Boosting

arXiv.org Artificial Intelligence

In the FARFETCH Fashion Recommendation challenge, participants needed to predict the order in which various products would be shown to a user in a recommendation impression. The data was provided in two phases: a validation phase and a test phase. The validation phase had a labelled training set that contained a binary column indicating whether a product had been clicked or not. The dataset comprises over 5,000,000 recommendation events, 450,000 products and 230,000 unique users. It represents real, unbiased, but anonymised, interactions of actual users of the FARFETCH platform. The final evaluation was done according to the performance in the second phase. A total of 167 participants took part in the challenge, and we secured the 6th rank during the final evaluation with an MRR of 0.4658 on the test set. We designed a unique context-aware system that takes the similarity of a product to the user context into account to rank products more effectively. Post-evaluation, we were able to fine-tune our approach, reaching an MRR of 0.4784 on the test set, which would have placed us at the 3rd position.
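
For reference, the MRR figures above are Mean Reciprocal Rank scores; below is a minimal sketch of how such a score is computed over ranked recommendation impressions, where the data layout is illustrative and not the challenge's actual format.

```python
def mean_reciprocal_rank(ranked_impressions):
    """Mean Reciprocal Rank over a list of impressions.

    Each impression is a (ranking, clicked) pair: `ranking` is the list of
    product ids in the order the model ranked them, and `clicked` is the id
    of the product the user actually clicked. The score for an impression is
    1 / (rank of the clicked product); MRR averages these over impressions.
    """
    total = 0.0
    for ranking, clicked in ranked_impressions:
        rank = ranking.index(clicked) + 1  # 1-based position of the clicked item
        total += 1.0 / rank
    return total / len(ranked_impressions)

# Illustrative usage: clicked items at ranks 2 and 1 give MRR = (1/2 + 1) / 2 = 0.75
example = [(["a", "b", "c"], "b"), (["x", "y"], "x")]
print(mean_reciprocal_rank(example))
```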