
Ensemble Methods: A Beginner's Guide


When I started my data science journey, terms like ensemble and boosting kept popping up. Whenever I opened the discussion forum of a Kaggle competition or looked at a winner's solution, it was mostly filled with these things. At first these discussions sounded totally alien, and this class of ensemble models looked like fancy stuff not meant for newbies, but trust me, once you have a basic understanding of the concepts you are going to love them!

So let's start with a very simple question: what exactly is an ensemble? "A group of separate things/people that contribute to a coordinated whole." In a way, this is the core idea behind the entire class of ensemble learning!

Let's rewind the clock a bit and go back to school days for a while. Remember how you used to get a report card with an overall grade? How exactly was that overall grade calculated? Your teachers gave feedback based on their own sets of criteria: your math teacher would assess you on algebra, trigonometry, and so on; your sports teacher would judge how you performed on the field; your music teacher would judge your vocal skills. The point is that each of these teachers has their own rules for judging a student's performance, and later all of these are combined into an overall grade for the student.
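The report-card analogy maps directly onto code: each teacher produces a score under their own criteria, and the ensemble's only job is to aggregate them. A minimal Python sketch (the subject names and weights are invented for illustration, not from any real grading scheme):

```python
def overall_grade(subject_scores, weights=None):
    """Combine per-teacher scores into one overall grade.

    A weighted average is the simplest aggregation -- exactly like
    averaging the predictions of individual models in a regression
    ensemble. With no weights given, every teacher counts equally.
    """
    if weights is None:
        weights = {subject: 1.0 for subject in subject_scores}
    total_weight = sum(weights[s] for s in subject_scores)
    return sum(subject_scores[s] * weights[s] for s in subject_scores) / total_weight

report_card = {"math": 82, "sports": 90, "music": 74}
print(overall_grade(report_card))  # plain average: 82.0
```

Swap the teachers for trained models and the scores for predictions, and this is precisely what averaging or voting ensembles do.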

[D] What's the best deep-dive explanation of XGBoost hyperparameters out there?


I'm not a total newbie, so I'll pass on all those "how to get started with XGBoost" articles, of which there are plenty. I remember having bumped into a site or blog with a great, comprehensive summary of each hyperparameter, but I lost the link and can't find it now through search. As far as I remember, it had a hyperparameter menu on the left, probably covered all boosting trees and their hyperparameters, and was created by a woman. Can anybody recall that source?

How to Develop an AdaBoost Ensemble in Python


Boosting is a class of ensemble machine learning algorithms that involve combining the predictions from many weak learners. A weak learner is a model that is very simple but still has some skill on the dataset. Boosting was a theoretical concept long before a practical algorithm could be developed, and the AdaBoost (adaptive boosting) algorithm was the first successful approach for the idea. The AdaBoost algorithm uses very short (one-level) decision trees as weak learners that are added sequentially to the ensemble. Each subsequent model attempts to correct the predictions made by the model before it in the sequence.
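The sequential reweighting described above fits in a few dozen lines. A from-scratch sketch in plain Python (a toy illustration of the classic algorithm with the alpha = 0.5 * ln((1 - err) / err) update, not scikit-learn's AdaBoostClassifier):

```python
import math

def train_stump(X, y, w):
    """One-level decision tree: exhaustively pick the feature, threshold
    and polarity with the lowest *weighted* error."""
    best = None
    for feat in range(len(X[0])):
        for thr in sorted({x[feat] for x in X}):
            for pol in (1, -1):
                preds = [pol if x[feat] >= thr else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, feat, thr, pol)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                    # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        err, feat, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)            # guard against division/log of zero
        alpha = 0.5 * math.log((1 - err) / err)  # this stump's vote weight
        ensemble.append((alpha, feat, thr, pol))
        # Upweight the samples this stump got wrong so the next stump
        # focuses on correcting them.
        for i, x in enumerate(X):
            pred = pol if x[feat] >= thr else -pol
            w[i] *= math.exp(-alpha * y[i] * pred)
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * (pol if x[feat] >= thr else -pol)
                for alpha, feat, thr, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D dataset: labels flip at x = 3.5.
X = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y, rounds=5)
print([predict(model, x) for x in X])  # reproduces y on this toy set
```

Each round trains a stump on the reweighted data, stores its vote weight alpha, and boosts the weight of the samples it misclassified, which is exactly the "correct the model before it" behaviour described above.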

Secure Collaborative XGBoost on Encrypted Data


Training a machine learning model requires a large quantity of high-quality data. One way to achieve this is to combine data from many different data organizations or data owners. But data owners are often unwilling to share their data with each other due to privacy concerns, which can stem from business competition, or be a matter of regulatory compliance. The question is: how can we mitigate such privacy concerns? Secure collaborative learning enables many data owners to build robust models on their collective data, but without revealing their data to each other.
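The excerpt doesn't describe the mechanism, and the actual Secure XGBoost system is far more involved; but one classic building block for computing on collective data without revealing it is additive secret sharing. A toy Python sketch of that idea only (illustrative, not the system's real protocol):

```python
import random

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_shares, modulus=MODULUS):
    """Split an integer into n random shares that sum to it (mod modulus).
    Any subset of fewer than n shares reveals nothing about the value."""
    shares = [random.randrange(modulus) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def secure_sum(owners_values, modulus=MODULUS):
    """Each owner splits its private value into shares; each aggregator
    sums one share from every owner, and only the total is reconstructed."""
    n = len(owners_values)
    all_shares = [share(v, n, modulus) for v in owners_values]
    partials = [sum(s[i] for s in all_shares) % modulus for i in range(n)]
    return sum(partials) % modulus

private_stats = [1200, 3400, 560]  # each data owner's private statistic
print(secure_sum(private_stats))   # 5160 -- computed without pooling raw data
```

No single party ever sees another owner's raw value, yet the joint statistic comes out correct; secure collaborative learning systems build model training out of primitives with this flavour (or out of hardware enclaves).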

Artificial Intelligence revolutionizes the insurance industry


Pricing: Through predictive models (with algorithms such as random forest, linear regression, xgboost, etc.), we can provide insurance premiums in a more dynamic and precise way. More specifically, premiums can be personalized according to driving habits, geographic area and commute distance. To the traditional price-setting variables, a new set of variables is added to improve the profitability of the portfolio. These variables depend on the company's own needs and capacities and can range from competitors' prices to the policyholder's traffic record, driver's license age and credit score, as well as external data systems and sources. The interesting thing here is the dynamism in determining the price: the models update as data comes in over time, recognizing patterns and adjusting the rate autonomously.
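As a toy illustration of usage-based pricing, a premium can be fit as a function of a driving-habit feature with ordinary least squares (the feature, numbers and coefficients below are invented, not from any insurer's model):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: premium = base + rate * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    rate = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return my - rate * mx, rate

# Hypothetical historical policies: annual km driven -> premium charged.
km      = [5000, 8000, 12000, 20000, 30000]
premium = [400.0, 460.0, 540.0, 700.0, 900.0]

base, per_km = fit_line(km, premium)
quote = base + per_km * 15000   # dynamic quote for a new policyholder
print(round(quote, 2))          # 600.0 on this synthetic data
```

Real pricing models use many more variables and nonlinear learners like xgboost, and are refit as new data arrives, which is the "autonomous rate adjustment" described above.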

Introducing the open-source Amazon SageMaker XGBoost algorithm container


XGBoost is a popular and efficient machine learning (ML) algorithm for regression and classification tasks on tabular datasets. It implements a technique known as gradient boosting on trees and performs remarkably well in ML competitions. Since its launch, Amazon SageMaker has supported XGBoost as a built-in managed algorithm. For more information, see Simplify machine learning with XGBoost and Amazon SageMaker. As of this writing, you can take advantage of the open-source Amazon SageMaker XGBoost container, which has improved flexibility, scalability, extensibility, and Managed Spot Training.

Do Decision Trees need Feature Scaling?


Machine learning algorithms have been evolving ever since their inception. Today the domain has come a long way, from mathematical modelling to ensemble modelling and beyond. This evolution has produced more robust, state-of-the-art (SOTA) models that are steadily bridging the gap between the capabilities of humans and AI. Ensemble modelling has given us one of those SOTA models: XGBoost. Recently I happened to participate in a machine learning hiring challenge where the problem statement was a classification problem.
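The question in the title has a quick empirical answer: a tree split depends only on the ordering of a feature's values, and monotonic scaling preserves ordering. A small demonstration with a hand-rolled decision stump (the data is made up for illustration):

```python
def best_stump(xs, ys):
    """Single-feature decision stump: the threshold that minimizes
    misclassifications when predicting 1 for x >= threshold."""
    best_thr, best_err = None, float("inf")
    for thr in sorted(xs):
        err = sum(1 for x, y in zip(xs, ys) if (1 if x >= thr else 0) != y)
        if err < best_err:
            best_thr, best_err = thr, err
    return best_thr

xs = [120, 300, 560, 1000, 2400]   # raw feature values
ys = [0, 0, 1, 1, 1]
scaled = [(x - min(xs)) / (max(xs) - min(xs)) for x in xs]  # min-max scaling

thr_raw, thr_scaled = best_stump(xs, ys), best_stump(scaled, ys)
preds_raw = [1 if x >= thr_raw else 0 for x in xs]
preds_scaled = [1 if x >= thr_scaled else 0 for x in scaled]
print(preds_raw == preds_scaled)  # True: the threshold moved, predictions didn't
```

The split threshold lands on a different number after scaling, but it separates exactly the same points, which is why tree-based models such as decision trees, random forests and XGBoost do not require feature scaling.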

DriveML: Self-Drive Machine Learning Projects


Implements some of the pillars of an automated machine learning pipeline, such as (i) automated data preparation, (ii) feature engineering, and (iii) model building in a classification context, including techniques such as (a) regularised regression [1], (b) logistic regression [2], (c) random forest [3], (d) decision tree [4] and (e) extreme gradient boosting (xgboost) [5], and finally, (iv) model explanation (using lift charts and partial dependency plots). It also provides additional features such as generating missing-at-random (MAR) variables and automated exploratory data analysis. Moreover, a function exports the model results, with the required plots, in an HTML vignette report format that follows industry and academic best practices.

Engineering Blog - Learnings from Distributed XGBoost on Amazon SageMaker


XGBoost is a popular Python library for gradient boosted decision trees. The implementation allows practitioners to distribute training across multiple compute instances (or workers), which is especially useful for large training sets. One tool used at Zalando for deploying production machine learning models is the managed service from Amazon called SageMaker. XGBoost is already included in SageMaker as a built-in algorithm, meaning that a prebuilt Docker container is available. This container also supports distributed training, making it easy to scale training jobs across many instances.

Adversarial Validation Approach to Concept Drift Problem in Automated Machine Learning Systems

In automated machine learning systems, concept drift in input data is one of the main challenges. It deteriorates model performance on new data over time. Previous research on concept drift mostly proposed model retraining after observing performance decreases. However, this approach is suboptimal because the system fixes the problem only after suffering from poor performance on new data. Here, we introduce an adversarial validation approach to concept drift problems in automated machine learning systems. With our approach, the system detects concept drift in new data before making inference, trains a model, and produces predictions adapted to the new data. We show that our approach addresses concept drift effectively with the AutoML3 Lifelong Machine Learning challenge data as well as in Uber's internal automated machine learning system, MaLTA.
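The core trick of adversarial validation can be sketched in a few lines: label the reference data 0 and the incoming data 1, score each point with a classifier, and compute ROC AUC. AUC near 0.5 means the two samples are indistinguishable (no drift); AUC well above 0.5 flags drift before any inference is made. A toy single-feature sketch, not the paper's or MaLTA's actual implementation (for brevity, the raw feature value stands in for a trained classifier's score):

```python
import random

def adversarial_auc(reference, incoming):
    """ROC AUC of separating reference (label 0) from incoming (label 1),
    using the raw feature value as the classifier score. In practice you
    would train a real classifier on many features and use its scores."""
    wins = 0.0
    for new in incoming:
        for old in reference:
            if new > old:
                wins += 1.0
            elif new == old:
                wins += 0.5
    return wins / (len(incoming) * len(reference))

random.seed(0)
reference = [random.gauss(0, 1) for _ in range(200)]  # training-time data
stable    = [random.gauss(0, 1) for _ in range(200)]  # same distribution
shifted   = [random.gauss(3, 1) for _ in range(200)]  # drifted distribution

print(adversarial_auc(reference, stable))   # close to 0.5: no drift detected
print(adversarial_auc(reference, shifted))  # close to 1.0: drift detected
```

When the AUC crosses a chosen threshold, the system knows the new data looks different from what the model was trained on and can retrain and adapt before serving degraded predictions, rather than after.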