Ensemble Learning


A Journey through XGBoost: Milestone 1

#artificialintelligence

Welcome to another article series! This time, we are discussing XGBoost (Extreme Gradient Boosting), one of the leading and most preferred machine learning algorithms among data scientists today. XGBoost is often called a money-making algorithm: it frequently outperforms other algorithms, delivers strong scores, and has helped its users claim cash prizes in data science competitions. The topic is broad and important, so we will cover it through a series of articles. It is a journey, and perhaps a long one for newcomers.


The Glory of XGBoost

#artificialintelligence

There are so many machine learning algorithms out there; how do you choose the best one for your problem? The answer depends on the application and the data: is it classification, regression, supervised, unsupervised, natural language processing, time series? There are many avenues to take, but in this article I am going to focus on one algorithm that I find particularly interesting: XGBoost. XGBoost stands for extreme gradient boosting, and it is an open-source library that provides an efficient and effective implementation of gradient boosting.
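As a quick illustration, here is a minimal sketch of the library's scikit-learn interface on a toy dataset; the dataset and hyperparameter values are illustrative choices, not from the article:

```python
# Minimal XGBoost sketch: train a boosted-tree classifier on a toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Hyperparameters here are illustrative, not tuned values.
model = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=4)
model.fit(X_train, y_train)                    # fit boosted trees on the training split
print("test accuracy:", model.score(X_test, y_test))
```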


Machine Learning May Reduce Mental Health Misdiagnosis

#artificialintelligence

Depressive episodes in bipolar disorder can be indistinguishable from those in major depressive disorder, leading to misdiagnosis and poor subsequent outcomes. Approximately 40% of patients with bipolar disorder are initially diagnosed with major depressive disorder, and the average delay in reaching a bipolar diagnosis ranges from 5.7 to 7.5 years. Using self-report data in conjunction with blood biomarker data, a machine learning algorithm called Extreme Gradient Boosting (XGBoost) was able to distinguish between bipolar disorder and major depressive disorder. The predictive capabilities of artificial intelligence (AI) can assist researchers and clinicians in disciplines characterized by complexity and nuance, and machine learning is increasingly being used in the life sciences, biotechnology, and mental health.


Accurate classification of COVID-19 patients with different severity via machine learning

#artificialintelligence

Infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can cause dramatic responses in coronavirus disease 2019 (COVID-19) patients at the multi-omics level [1-3], so it is essential to systematically assess the pathogenesis of COVID-19. In our previous study, we presented the first trans-omics landscape of 236 COVID-19 patients across 4 clinical severity groups (asymptomatic, mild, severe, and critically ill cases) and found that the mild and severe COVID-19 patients shared several similar characteristics [4]. However, it is crucial to discriminate mild from severe COVID-19 patients so that the latter can be protected from disease progression through early intervention. Herein, we developed an extreme gradient boosting (XGBoost) machine-learning model to predict COVID-19 severity by leveraging multi-omics data. Briefly, we randomly stratified samples into a training set (80%) and an independent testing set (20%) (Figure 1A; see Methods in the Supporting Information).
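For readers unfamiliar with this kind of setup, a minimal sketch of such a stratified 80/20 split feeding a multi-class XGBoost model might look like the following; the `omics` feature matrix and `severity` labels are placeholders, and this is not the authors' code:

```python
# Sketch: stratified 80/20 split plus a multi-class XGBoost severity model.
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# 'omics' and 'severity' are placeholders for the multi-omics features and
# the 4 severity classes (e.g., 0=asymptomatic .. 3=critically ill).
X_train, X_test, y_train, y_test = train_test_split(
    omics, severity,
    test_size=0.2,          # 20% independent testing set
    stratify=severity,      # preserve class proportions in both splits
    random_state=0)

clf = XGBClassifier(objective="multi:softprob")
clf.fit(X_train, y_train)   # evaluate later on the held-out X_test
```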


How to use PyCaret -- the library for lazy data scientists

#artificialintelligence

When we approach supervised machine learning problems, it can be tempting to just see how a random forest or gradient boosting model performs and stop experimenting if we are satisfied with the results. What if you could compare many different models with just one line of code? What if you could reduce each step of the data science process from feature engineering to model deployment to just a few lines of code? This is exactly where PyCaret comes into play. PyCaret is a high-level, low-code Python library that makes it easy to compare, train, evaluate, tune, and deploy machine learning models with only a few lines of code.
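For instance, a minimal sketch using PyCaret's classification module might look like this; the CSV file and target column name are placeholders for your own data:

```python
# Minimal PyCaret sketch: compare many classifiers with one call.
import pandas as pd
from pycaret.classification import setup, compare_models, tune_model, predict_model

df = pd.read_csv("your_data.csv")         # placeholder dataset

s = setup(data=df, target="target")       # preprocessing + train/test split
best = compare_models()                   # train and rank many candidate models
tuned = tune_model(best)                  # hyperparameter tuning of the winner
predict_model(tuned)                      # evaluate on the hold-out set
```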


Causal Gradient Boosting: Boosted Instrumental Variable Regression

arXiv.org Machine Learning

Recent advances in the literature have demonstrated that standard supervised learning algorithms are ill-suited for problems with endogenous explanatory variables. To correct for the endogeneity bias, many variants of nonparametric instrumental variable regression methods have been developed. In this paper, we propose an alternative algorithm called boostIV that builds on the traditional gradient boosting algorithm and corrects for the endogeneity bias. The algorithm is very intuitive and resembles an iterative version of the standard 2SLS estimator. Moreover, our approach is data driven, meaning that the researcher does not have to take a stance on either the form of the target function approximation or the choice of instruments. We demonstrate that our estimator is consistent under mild conditions. We carry out extensive Monte Carlo simulations to demonstrate the finite-sample performance of our algorithm compared to other recently developed methods. We show that boostIV is at worst on par with the existing methods and on average significantly outperforms them.
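As a rough intuition for how an iterative 2SLS-style boosting loop could look, here is an illustrative sketch; it is not the authors' boostIV implementation, and the linear first stage and tree-based weak learner are our own simplifying assumptions:

```python
# Illustrative 2SLS-flavored boosting loop (NOT the paper's boostIV code).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def boosted_iv(X, Z, y, n_rounds=100, lr=0.1):
    f, learners = np.zeros(len(y)), []
    for _ in range(n_rounds):
        resid = y - f
        # "first stage": project the current residual onto the instruments Z
        proj = LinearRegression().fit(Z, resid).predict(Z)
        # "second stage": a weak learner approximates the projection from X
        tree = DecisionTreeRegressor(max_depth=3).fit(X, proj)
        f += lr * tree.predict(X)
        learners.append(tree)
    return learners  # predict via lr * sum(t.predict(X_new) for t in learners)
```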


Hyperboost: Hyperparameter Optimization by Gradient Boosting surrogate models

arXiv.org Machine Learning

Bayesian Optimization is a popular tool for tuning algorithms in automatic machine learning (AutoML) systems. Current state-of-the-art methods leverage Random Forests or Gaussian processes to build a surrogate model that predicts algorithm performance given a certain set of hyperparameter settings. In this paper, we propose a new surrogate model based on gradient boosting, where we use quantile regression to provide optimistic estimates of the performance of an unobserved hyperparameter setting, and combine this with a distance metric between unobserved and observed hyperparameter settings to help regulate exploration. We demonstrate empirically that the new method is able to outperform some state-of-the-art techniques across a reasonably sized set of classification problems.
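To make the idea concrete, here is a hedged sketch of a quantile-gradient-boosting surrogate with a distance-based exploration bonus; the class, its parameter names, and the acquisition form are our own illustrative choices, not the paper's code:

```python
# Sketch: optimistic quantile-GBT surrogate for hyperparameter optimization.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

class QuantileSurrogate:
    """Predicts a low (optimistic) quantile of validation loss for candidate
    hyperparameter vectors, minus a bonus for being far from observed ones."""
    def __init__(self, alpha=0.1, explore=0.05):
        self.model = GradientBoostingRegressor(loss="quantile", alpha=alpha)
        self.explore = explore

    def fit(self, H, losses):
        self.H = np.asarray(H)            # observed hyperparameter settings
        self.model.fit(self.H, losses)    # quantile regression on their losses
        return self

    def acquisition(self, candidates):
        candidates = np.asarray(candidates)
        optimistic = self.model.predict(candidates)   # alpha-quantile estimate
        # distance to the nearest observed setting encourages exploration
        d = np.min(np.linalg.norm(
            candidates[:, None, :] - self.H[None, :, :], axis=-1), axis=1)
        return optimistic - self.explore * d          # lower is better
```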


(Decision and regression) tree ensemble based kernels for regression and classification

arXiv.org Machine Learning

Tree-based ensembles such as Breiman's random forest (RF) and gradient boosted trees (GBT) can be interpreted as implicit kernel generators, where the ensuing proximity matrix represents the data-driven tree ensemble kernel. The kernel perspective on the RF has been used to develop a principled framework for theoretical investigation of its statistical properties. Recently, it has been shown that the kernel interpretation is germane to other tree-based ensembles, e.g., GBTs. However, the practical utility of the links between kernels and tree ensembles has not been widely explored or systematically evaluated. The focus of our work is the investigation of the interplay between kernel methods and tree-based ensembles, including the RF and GBT. We elucidate the performance and properties of the RF- and GBT-based kernels in a comprehensive simulation study comprising continuous and binary targets. We show that for continuous targets, the RF/GBT kernels are competitive with their respective ensembles in higher-dimensional scenarios, particularly in cases with a larger number of noisy features. For binary targets, the RF/GBT kernels and their respective ensembles exhibit comparable performance. We provide results from real-life data sets for regression and classification to show how these insights may be leveraged in practice. Overall, our results support the tree ensemble based kernels as a valuable addition to the practitioner's toolbox. Finally, we discuss extensions of the tree ensemble based kernels for survival targets, and for interpretable prototype and landmarking classification and regression. We outline a future line of research for kernels furnished by Bayesian counterparts of the frequentist tree ensembles.
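A common way to realize such a tree ensemble kernel is via leaf co-occurrence proximities; the sketch below illustrates this construction for a random forest (the construction is a standard one, offered here as our own illustration rather than the paper's implementation):

```python
# Sketch: RF proximity kernel from leaf co-occurrence across trees.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rf_kernel(model, X_a, X_b):
    """K[i, j] = fraction of trees in which X_a[i] and X_b[j] share a leaf."""
    leaves_a = model.apply(X_a)   # shape (n_a, n_trees): leaf index per tree
    leaves_b = model.apply(X_b)
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=-1)

# usage sketch: fit an RF, build the kernel, plug it into a kernel method
# rf = RandomForestRegressor(n_estimators=500).fit(X_train, y_train)
# K_train = rf_kernel(rf, X_train, X_train)   # e.g., for kernel ridge regression
```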


Automatic detection of abnormal EEG signals using wavelet feature extraction and gradient boosting decision tree

arXiv.org Artificial Intelligence

Electroencephalography (EEG) is frequently used for the diagnostic evaluation of various brain-related disorders due to its excellent resolution, non-invasive nature, and low cost. However, manual analysis of EEG signals can be a strenuous and time-consuming process for experts: physicians require long training to develop expertise in it, and experts additionally show low inter-rater agreement (IRA) among themselves. Therefore, many Computer Aided Diagnostic (CAD) studies have considered automating the interpretation of EEG signals to alleviate the workload and support the final diagnosis. In this paper, we present an automatic binary classification framework for brain signals in multichannel EEG recordings. We propose to use Wavelet Packet Decomposition (WPD) to decompose the EEG signals into frequency sub-bands and extract a set of statistical features from each of the selected coefficients. Moreover, we propose a novel method to reduce the dimension of the feature space without compromising the quality of the extracted features. The extracted features are classified using different Gradient Boosting Decision Tree (GBDT) based classification frameworks, namely CatBoost, XGBoost, and LightGBM. We used the Temple University Hospital EEG Abnormal Corpus V2.0.0 to test our proposed technique. We found that the CatBoost classifier achieves a binary classification accuracy of 87.68% and outperforms state-of-the-art techniques on the same dataset by more than 1% in accuracy and more than 3% in sensitivity. The results obtained in this research provide important insights into the usefulness of WPD feature extraction and GBDT classifiers for EEG classification.
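A minimal sketch of the WPD-plus-GBDT pipeline the abstract describes might look like the following; the wavelet, decomposition level, feature set, and classifier settings are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: wavelet packet sub-band statistics fed to a GBDT classifier.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from xgboost import XGBClassifier

def wpd_features(signal, wavelet="db4", level=4):
    """Decompose one channel into frequency sub-bands and summarize each."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):   # sub-band coefficients
        c = node.data
        feats += [c.mean(), c.std(), skew(c), kurtosis(c)]
    return np.array(feats)

# usage sketch, with 'signals' and 0/1 normal/abnormal 'labels' as placeholders:
# X = np.stack([wpd_features(s) for s in signals])
# clf = XGBClassifier(n_estimators=300, max_depth=4).fit(X, labels)
```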


Implementing the AdaBoost Algorithm From Scratch - KDnuggets

#artificialintelligence

Boosting is an ensemble technique that attempts to create a strong classifier from a number of weak classifiers. Unlike many machine learning approaches that focus on producing high-quality predictions with a single model, boosting algorithms seek to improve predictive power by training a sequence of weak models, each compensating for the weaknesses of its predecessors, thereby improving overall prediction accuracy. AdaBoost, short for Adaptive Boosting, is a machine learning algorithm formulated by Yoav Freund and Robert Schapire. The AdaBoost technique typically uses decision stumps, i.e., decision trees with a depth of one, as its weak learners.
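To make the sequence-of-weak-learners idea concrete, here is a from-scratch sketch of discrete AdaBoost with decision stumps (labels are assumed to be encoded as -1/+1; the details are illustrative rather than taken from the KDnuggets post):

```python
# From-scratch sketch of discrete AdaBoost with decision stumps.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    n = len(y)
    w = np.full(n, 1 / n)                         # uniform initial sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)   # a decision stump
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()                      # weighted training error
        alpha = 0.5 * np.log((1 - err) / (err + 1e-10))
        w *= np.exp(-alpha * y * pred)                # up-weight the mistakes
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    scores = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(scores)                            # weighted majority vote
```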