Mastering XGBoost


In the case of XGBoost, hyperparameter tuning is more useful to discuss than the underlying mathematics: the mathematics are already embedded in the code libraries, while tuning is unusually complex, time-consuming, and necessary for deployment. Manual hyperparameter tuning is essential and time-consuming in many machine learning models, and it is especially so in XGBoost. Therefore, while this section focuses on a key element of deploying XGBoost, in our case study predicting new fashions ("fast fashion") to gain competitive advantage in online apparel sales, these hyperparameter tuning lessons apply to all applications of XGBoost, and to many of the other machine learning models discussed here as well.

The distinction between parameters and hyperparameters, and their respective roles, is critical to affordable, timely, and accurate machine learning deployments. A core benefit of machine learning is its ability to discover patterns and regularities in Big Data by automatically tuning many thousands or millions of "learnable" parameters. In tree-based models like XGBoost (as in decision trees and random forests), these learnable parameters are the split variable and threshold chosen at each node and the prediction value at each leaf. Hyperparameters, by contrast, are settings the practitioner chooses before training, such as tree depth, learning rate, and the number of trees.
