Ensemble Learning


Machine learning identifies patients in need of end-of-life planning

#artificialintelligence

Penn Medicine researchers have developed a machine learning algorithm that identifies oncology patients at risk of short-term mortality who need end-of-life conversations with clinicians. In a study of 26,525 patients receiving outpatient oncology care, the algorithm accurately predicted six-month mortality risk among patients with cancer using electronic health record data, including comorbidities such as high blood pressure along with laboratory and electrocardiogram results. The study found that 51 percent of the patients the algorithm flagged as "high priority" for end-of-life conversations died within six months, versus fewer than 4 percent in the "lower priority" group. "Our findings suggest that ML tools hold promise for integration into clinical workflows to ensure that patients with cancer have timely conversations about their goals and values," concludes the study, which was published in the journal JAMA Network Open. The researchers developed, validated, and compared three ML models--gradient boosting, logistic regression, and random forest--to estimate six-month mortality among patients seen in oncology clinics affiliated with a large academic cancer center.
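
The study's modeling setup can be approximated with standard tooling. Here is a minimal sketch using scikit-learn, with a synthetic imbalanced dataset standing in for the EHR-derived features and six-month-mortality labels, which are not public:

    # Sketch: compare the three model families used in the study on a
    # binary outcome. X and y are hypothetical stand-ins for the EHR data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # ~10% positive class, mimicking a relatively rare outcome
    X, y = make_classification(n_samples=5000, n_features=30,
                               weights=[0.9, 0.1], random_state=0)

    models = {
        "gradient boosting": GradientBoostingClassifier(random_state=0),
        "logistic regression": LogisticRegression(max_iter=1000),
        "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean ROC AUC = {auc:.3f}")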


Randomization as Regularization: A Degrees of Freedom Explanation for Random Forest Success

#artificialintelligence

The sustained success of random forests has naturally led to a desire to better understand the statistical and mathematical properties of the procedure. Lin and Jeon (2006) introduced the potential nearest neighbor framework, and Biau and Devroye (2010) later established related consistency properties. In the last several years, a number of important statistical properties of random forests have also been established when base learners are constructed with subsamples rather than bootstrap samples. Scornet et al. (2015) provided the first consistency result for Breiman's original random forest algorithm under the assumption that the true underlying regression function is additive. Despite the impressive volume of research from the past two decades and the exciting recent progress in establishing their statistical properties, a satisfying explanation for the sustained empirical success of random forests has yet to be provided.
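
The paper's central idea, that the randomness injected through the mtry parameter acts as implicit regularization, can be probed empirically. A hedged sketch with scikit-learn, where max_features plays the role of mtry; the data and settings are illustrative, not from the paper:

    # Sketch: vary max_features (mtry) in a random forest and watch test
    # error respond, illustrating randomization acting as regularization.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for mtry in [1, 5, 10, 20]:  # 20 = all features, i.e. plain bagging
        rf = RandomForestRegressor(n_estimators=200, max_features=mtry, random_state=0)
        rf.fit(X_tr, y_tr)
        mse = mean_squared_error(y_te, rf.predict(X_te))
        print(f"mtry={mtry:2d}  test MSE={mse:.1f}")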


A Comprehensive Guide to Random Forest in R

#artificialintelligence

Classification is the task of predicting the class of a given input data point. Classification problems are common in machine learning, and they fall under the supervised learning paradigm.
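
To make the definition concrete, here is a minimal classification example with a random forest (the guide itself works in R; this sketch uses Python's scikit-learn purely for illustration):

    # Sketch: classify iris flowers by species from their measurements.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    print("predicted class of first test point:", clf.predict(X_te[:1])[0])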


Why You Should Build XGBoost Models Within H2O - Sefik Ilkin Serengil

#artificialintelligence

XGBoost triggered the rise of tree-based models in the machine learning world. It has earned its reputation through robust models, which often achieve around 2% higher accuracy than alternatives. On the other hand, it is a fact that XGBoost can be almost 10 times slower than LightGBM, and speed means a lot in a data challenge.
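
The speed claim is easy to check on your own data. A rough sketch, assuming the xgboost and lightgbm packages are installed; exact timings depend heavily on hardware, dataset size, and parameters:

    # Sketch: compare wall-clock training time of XGBoost and LightGBM
    # on the same synthetic data.
    import time
    from lightgbm import LGBMClassifier
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)

    for name, model in [("xgboost", XGBClassifier(n_estimators=200)),
                        ("lightgbm", LGBMClassifier(n_estimators=200))]:
        start = time.perf_counter()
        model.fit(X, y)
        print(f"{name}: {time.perf_counter() - start:.1f}s")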


New version of SageMaker XGBoost algorithm available

#artificialintelligence

Customers can now use a new version of the SageMaker XGBoost algorithm based on version 0.90 of the open-source XGBoost framework. XGBoost is a highly efficient and flexible algorithm for regression, classification, and ranking problems.
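
Selecting the new version comes down to requesting the matching container image. A hedged sketch, assuming the SageMaker Python SDK v2; the version tag "0.90-1" and the role and bucket names are placeholders to adapt to your account:

    # Sketch: point a SageMaker Estimator at the 0.90 XGBoost container.
    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    image_uri = sagemaker.image_uris.retrieve(
        framework="xgboost", region=session.boto_region_name, version="0.90-1")

    xgb = Estimator(
        image_uri=image_uri,
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://your-bucket/xgboost-output",        # placeholder
        sagemaker_session=session,
    )
    xgb.set_hyperparameters(objective="reg:squarederror", num_round=100)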


XGBoost in Amazon SageMaker

#artificialintelligence

SageMaker is Amazon Web Services' (AWS) machine learning platform that works in the cloud. It is fully managed and allows one to perform an entire data science workflow on the platform. In this post, I will show you how to pull your data from AWS S3, upload data into S3 while bypassing local storage, train a model, deploy an endpoint, perform predictions, and run hyperparameter tuning. The data cleaning and feature engineering code are derived from this blog post, written by Andrew Long, who gave full permission to use his code. The dataset can be found here.
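
That workflow maps onto a handful of SDK calls. A condensed sketch, again assuming the SageMaker Python SDK v2 and an estimator configured like the one in the previous item; the S3 paths are placeholders:

    # Sketch: train on data already in S3, deploy an endpoint, and launch
    # hyperparameter tuning. `xgb` is the Estimator from the earlier sketch.
    from sagemaker.inputs import TrainingInput
    from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

    train_input = TrainingInput("s3://your-bucket/train.csv", content_type="text/csv")
    val_input = TrainingInput("s3://your-bucket/validation.csv", content_type="text/csv")
    xgb.fit({"train": train_input, "validation": val_input})  # trains directly from S3

    predictor = xgb.deploy(initial_instance_count=1,  # real-time endpoint
                           instance_type="ml.m5.large")
    # predictor.predict(...) serves requests; call predictor.delete_endpoint()
    # when finished to stop incurring charges.

    tuner = HyperparameterTuner(
        estimator=xgb,
        objective_metric_name="validation:rmse",
        hyperparameter_ranges={"eta": ContinuousParameter(0.01, 0.3)},
        max_jobs=10, max_parallel_jobs=2,
    )
    tuner.fit({"train": train_input, "validation": val_input})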


SAS Tutorial How to train forest models in SAS

#artificialintelligence

In this SAS How To Tutorial, Cat Truxillo shows you how to train forest models in SAS. There are multiple ways to train forest models, and Cat demonstrates two point-and-click methods. The first uses SAS Visual Analytics; in the second, she trains a forest in Model Studio on SAS Viya. Before diving into the examples, Cat answers the question "what are random forests?"


Minimal Variance Sampling in Stochastic Gradient Boosting

arXiv.org Machine Learning

Stochastic Gradient Boosting (SGB) is a widely used approach to regularization of boosting models based on decision trees. It was shown that, in many cases, random sampling at each iteration can lead to better generalization performance of the model and can also decrease the learning time. Different sampling approaches have been proposed in which probabilities are not uniform, and it is not currently clear which approach is the most effective. In this paper, we formulate the problem of randomization in SGB in terms of optimizing the sampling probabilities to maximize the estimation accuracy of the split scoring used to train decision trees. This optimization problem has a closed-form, nearly optimal solution, and it leads to a new sampling technique, which we call Minimal Variance Sampling (MVS). The method both decreases the number of examples needed for each iteration of boosting and significantly increases the quality of the model compared to state-of-the-art sampling methods. The superiority of the algorithm was confirmed by introducing MVS as the new default option for subsampling in CatBoost, a gradient boosting library that achieves state-of-the-art quality on various machine learning tasks.
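
Since MVS is exposed as a subsampling option in CatBoost, trying it is a small configuration change. A brief sketch, assuming the catboost Python package and illustrative synthetic data:

    # Sketch: enable Minimal Variance Sampling explicitly in CatBoost.
    # In recent CatBoost versions MVS is already the default on CPU.
    from catboost import CatBoostClassifier
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

    model = CatBoostClassifier(
        iterations=300,
        bootstrap_type="MVS",  # Minimal Variance Sampling
        subsample=0.5,         # fraction of examples sampled per iteration
        verbose=100,
    )
    model.fit(X, y)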


[Webinar] Introduction to AutoML: A Hands-On Experience with H2O AutoML

#artificialintelligence

In this episode, we are going to cover the AutoML concept. Automated Machine Learning, or AutoML for short, lets you skip design steps in machine learning, including algorithm selection, model design, and hyperparameter tuning. It can build models that surpass hand-crafted ones, and the longer the time budget you provide, the better the result. We will also have a hands-on experience with H2O AutoML.
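
A taste of the hands-on portion: a minimal H2O AutoML run with a fixed time budget, assuming a local H2O installation and a hypothetical CSV with a column named "target":

    # Sketch: run H2O AutoML for a time budget and inspect the leaderboard.
    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()
    frame = h2o.import_file("train.csv")           # hypothetical dataset
    frame["target"] = frame["target"].asfactor()   # treat as classification

    aml = H2OAutoML(max_runtime_secs=600, seed=1)  # longer budget, better models
    aml.train(y="target", training_frame=frame)
    print(aml.leaderboard.head())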


Things I learned about Random Forest Machine Learning Algorithm

#artificialintelligence

At a meetup I attended a couple of months ago in Sydney, I was introduced to an online machine learning course by fast.ai. I never paid any attention to it then. This week, while working on a Kaggle competition and looking for ways to improve my score, I came across the course again and decided to give it a try. Here is what I learned from the first lecture, a 1-hour-17-minute video titled INTRODUCTION TO RANDOM FOREST.