
Decision Trees, Random Forests, AdaBoost & XGBoost in Python


You're looking for a complete decision tree course that teaches you everything you need to create a decision tree, Random Forest, or XGBoost model in Python? You've found the right course on decision trees and advanced tree-based techniques. After completing this course you will be able to:

- Identify business problems that can be solved with decision trees, Random Forests, or XGBoost.
- Get a solid understanding of decision trees and the business scenarios where they are applicable.
- Tune a machine learning model's hyperparameters and evaluate its performance.
- Use pandas DataFrames to manipulate data and make statistical computations.
- Use decision trees to make predictions.
- Learn the advantages and disadvantages of the different algorithms.

Students will need to install Python and Anaconda; a separate lecture walks you through the installation.
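The workflow the course describes (load data into a pandas DataFrame, tune a decision tree's hyperparameters, evaluate performance) can be sketched with scikit-learn. This is a minimal illustration, not the course's own material; the dataset, parameter grid, and split sizes are arbitrary choices for the example.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load data as a pandas DataFrame and compute summary statistics.
data = load_iris(as_frame=True)
print(data.frame.describe())

X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=42
)

# Tune decision-tree hyperparameters with cross-validated grid search.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    param_grid={"max_depth": [2, 3, 4, 5], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))  # accuracy on held-out data
```

The same pattern extends directly to `RandomForestClassifier` or XGBoost's `XGBClassifier` by swapping the estimator and its parameter grid.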

Gradient boosting machine with partially randomized decision trees Machine Learning

The gradient boosting machine is a powerful ensemble-based machine learning method for solving regression problems. However, one difficulty in using it is a possible discontinuity of the regression function, which arises when regions of the training data are not densely covered by training points. In order to overcome this difficulty and to reduce the computational complexity of the gradient boosting machine, we propose to apply partially randomized trees, which can be regarded as a special case of extremely randomized trees, within the gradient boosting framework. The gradient boosting machine with partially randomized trees is illustrated by means of many numerical examples using synthetic and real data.
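The abstract does not include code, but the core idea (gradient boosting where each base learner chooses splits with some randomness) can be sketched by boosting scikit-learn's `ExtraTreeRegressor`, which picks split thresholds at random. Using `ExtraTreeRegressor` as a stand-in for the paper's partially randomized trees is an assumption; the learning rate, depth, and ensemble size below are illustrative.

```python
import numpy as np
from sklearn.tree import ExtraTreeRegressor


def boost_randomized_trees(X, y, n_estimators=50, lr=0.1, max_depth=3, seed=0):
    """Gradient boosting for squared loss with randomized-split base trees."""
    rng = np.random.RandomState(seed)
    base = y.mean()               # initial constant prediction
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_estimators):
        residual = y - pred       # negative gradient of the squared loss
        tree = ExtraTreeRegressor(
            max_depth=max_depth, random_state=rng.randint(2**31 - 1)
        )
        tree.fit(X, residual)     # each tree fits the current residuals
        pred += lr * tree.predict(X)
        trees.append(tree)
    return base, trees


def boosted_predict(base, trees, X, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)
```

Because each tree's thresholds are drawn at random rather than optimized, neighbouring leaves vary more smoothly across the ensemble, which is the intuition behind using randomized trees to soften discontinuities.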

Formal Verification of Input-Output Mappings of Tree Ensembles Machine Learning

Recent advances in machine learning and artificial intelligence are now being considered in safety-critical autonomous systems, where software defects may cause severe harm to humans and the environment. Design organizations in these domains are currently unable to provide convincing arguments that their systems are safe to operate when machine learning algorithms are used to implement their software. In this paper, we present an efficient method to extract equivalence classes from decision trees and tree ensembles, and to formally verify that their input-output mappings comply with requirements. The idea is that, given that safety requirements can be traced to desirable properties of system input-output patterns, we can use positive verification outcomes in safety arguments. This paper presents the implementation of the method in the tool VoTE (Verifier of Tree Ensembles) and evaluates its scalability on two case studies from the current literature. We demonstrate that our method is practical for tree ensembles trained on low-dimensional data with up to 25 decision trees and tree depths of up to 20. Our work also studies the limitations of the method on high-dimensional data and offers a preliminary investigation of the trade-off between the number of trees and verification time.
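The key primitive in the abstract, extracting equivalence classes from a decision tree, can be illustrated for a single scikit-learn tree: each leaf corresponds to an axis-aligned box of inputs that all map to the same output, so a requirement can be checked once per box instead of once per input. This sketch uses scikit-learn's tree internals and is not VoTE's implementation; the requirement checked at the end is a made-up example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier


def leaf_regions(clf, n_features):
    """Enumerate (box, prediction) equivalence classes of a fitted tree.

    Each box is a list of (low, high) bounds per feature; every input
    inside a box reaches the same leaf, hence gets the same prediction.
    """
    t = clf.tree_
    regions = []

    def recurse(node, box):
        if t.children_left[node] == -1:          # leaf node
            pred = int(np.argmax(t.value[node])) # majority class at the leaf
            regions.append(([tuple(b) for b in box], pred))
            return
        f, thr = t.feature[node], t.threshold[node]
        left = [list(b) for b in box]            # x[f] <= thr goes left
        left[f][1] = min(left[f][1], thr)
        recurse(t.children_left[node], left)
        right = [list(b) for b in box]           # x[f] > thr goes right
        right[f][0] = max(right[f][0], thr)
        recurse(t.children_right[node], right)

    recurse(0, [[-np.inf, np.inf] for _ in range(n_features)])
    return regions


data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
regions = leaf_regions(clf, data.data.shape[1])

# Example "requirement": every equivalence class predicts a valid class label.
print(all(pred in {0, 1, 2} for _, pred in regions))
```

For an ensemble, the number of such regions grows with the product of leaf counts across trees, which is why the paper's scalability limits (tree count and depth) matter.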