How to Boost Any Loss Function
–Neural Information Processing Systems
Boosting is a highly successful ML-born optimization setting in which one is required to computationally efficiently learn arbitrarily good models based on access to a weak learner oracle, providing classifiers performing at least slightly differently from random guessing. A key difference with gradient-based optimization is that boosting's original model does not require access to first-order information about a loss, yet the decades-long history of boosting has quickly evolved it into a first-order optimization setting, sometimes even wrongfully defined as such. Owing to recent progress extending gradient-based optimization to use only a loss' zeroth (0
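To make the boosting setting concrete, the following is a minimal sketch of the classic picture the abstract describes: a weak learner oracle (here, decision stumps) returns classifiers only slightly better than random guessing, and the booster combines them into an arbitrarily good model by reweighting examples. This is a standard AdaBoost-style illustration on a toy 1-D dataset, not the method of the paper itself; all names and data below are illustrative.

```python
import math

# Toy 1-D dataset: threshold stumps can separate it.
X = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
y = [1, 1, 1, -1, -1, -1]

def stump_predict(threshold, polarity, x):
    # A decision stump: the simplest "weak learner".
    return polarity if x <= threshold else -polarity

def weak_learner(X, y, w):
    # Oracle: return the stump with the lowest weighted error.
    best, best_err = None, float("inf")
    for t in X:
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(t, pol, xi) != yi)
            if err < best_err:
                best_err, best = err, (t, pol)
    return best, best_err

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n          # uniform example weights
    ensemble = []              # (alpha, stump) pairs
    for _ in range(rounds):
        (t, pol), err = weak_learner(X, y, w)
        err = max(err, 1e-10)  # guard against zero error
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, (t, pol)))
        # Reweight: misclassified examples gain weight, so the
        # next weak hypothesis focuses on them.
        w = [wi * math.exp(-alpha * yi * stump_predict(t, pol, xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    s = sum(a * stump_predict(t, pol, x) for a, (t, pol) in ensemble)
    return 1 if s >= 0 else -1

model = adaboost(X, y)
preds = [predict(model, xi) for xi in X]
```

Note that the oracle only ever sees example weights and labels; no gradient of a loss is queried anywhere, which is the distinction the abstract draws against first-order optimization.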
May-28-2025, 16:26:36 GMT