Seeing Numbers: Bayesian Optimisation of a LightGBM model
In a classic case of "be careful what you search for," reading a couple of online articles on model hyper-parameter optimisation has led to my news feed being bombarded with how-to guides guaranteeing "the most powerful model possible" "in a few easy steps." What I notice, however, is that few of these articles mention that hyper-parameter tuning is only one part of the modelling process, not a silver bullet for predictive power. Fewer still mention that the gains from hyper-parameter optimisation tend to be modest, and are likely smaller than the gains from decent feature engineering.

LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is an example of an ensemble technique: it combines many weak individual models (shallow decision trees) into a single accurate model.
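As a minimal sketch of the "weak learners combined into a strong model" idea, the snippet below trees boosting on a synthetic dataset. It uses scikit-learn's `GradientBoostingClassifier` as a stand-in (LightGBM's `LGBMClassifier` has an almost identical interface, but may not be installed everywhere); the dataset and every parameter value here are illustrative choices, not anything from a real experiment.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

# Each tree is deliberately weak (max_depth=2); boosting fits 100 of them
# sequentially, each new tree correcting the errors of the ensemble so far.
model = GradientBoostingClassifier(
    n_estimators=100, max_depth=2, learning_rate=0.1, random_state=42
)
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

The individual depth-2 trees are barely better than guessing on their own; it is the sequential combination that produces an accurate model, which is exactly the property that makes the ensemble's hyper-parameters (number of trees, depth, learning rate) worth tuning.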
Aug-6-2021, 09:16:00 GMT
- Technology