Learning Near-optimal Convex Combinations of Basis Models with Generalization Guarantees
Nguyen, Tan; Ye, Nan; Bartlett, Peter L.
The problem of learning an optimal convex combination of basis models has been studied in a number of works, with a focus on theoretical analysis but little investigation of the empirical performance of the approach. In this paper, we present some new theoretical insights, and empirical results that demonstrate the effectiveness of the approach. Theoretically, we first consider whether we can replace convex combinations by linear combinations, and obtain convergence results similar to existing results for learning from a convex hull. We present a negative result showing that the linear hull of very simple basis functions can have unbounded capacity, and is thus prone to overfitting; on the other hand, convex hulls remain rich but have bounded capacity. In addition, we obtain a generalization bound for a general class of Lipschitz loss functions. Empirically, we first discuss how a convex combination can be greedily learned with early stopping, and how a convex combination can be learned non-greedily when the number of basis models is known a priori. Our experiments suggest that the greedy scheme is competitive with or better than several baselines, including boosting and random forests. The greedy algorithm requires little effort in hyperparameter tuning, and also seems to adapt to the underlying complexity of the problem.
Oct-8-2019
- Country:
- North America > United States (0.14)
- Oceania > Australia (0.14)
- Genre:
- Research Report > New Finding (0.66)
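The listing does not include the algorithm itself, but the greedy scheme described in the abstract resembles Frank-Wolfe-style greedy fitting over the convex hull of a fixed pool of basis models. Below is a minimal sketch under that assumption; the function name, the squared-loss objective, the 2/(t+2) step size, and the patience-based early stopping are all illustrative choices, not necessarily the authors' exact method.

```python
import numpy as np

def greedy_convex_combination(basis_preds, y, val_preds, y_val,
                              max_rounds=200, patience=10):
    """Greedily learn a convex combination of fixed basis-model predictions.

    basis_preds: (n_models, n_train) outputs of each basis model on training data.
    val_preds:   (n_models, n_val) outputs on held-out data, used for early stopping.
    Frank-Wolfe-style steps eta_t = 2/(t+2) keep the iterate in the convex hull.
    """
    n_models = basis_preds.shape[0]
    weights = np.zeros(n_models)
    F_train = np.zeros(len(y), dtype=float)
    F_val = np.zeros(len(y_val), dtype=float)
    best_val, best_weights, stall = np.inf, weights.copy(), 0

    for t in range(max_rounds):
        eta = 2.0 / (t + 2.0)
        # greedily pick the basis model whose inclusion most reduces training loss
        losses = [np.mean((((1 - eta) * F_train + eta * h) - y) ** 2)
                  for h in basis_preds]
        j = int(np.argmin(losses))
        weights *= (1 - eta)
        weights[j] += eta                      # weights stay on the simplex
        F_train = (1 - eta) * F_train + eta * basis_preds[j]
        F_val = (1 - eta) * F_val + eta * val_preds[j]

        val_loss = np.mean((F_val - y_val) ** 2)
        if val_loss < best_val:
            best_val, best_weights, stall = val_loss, weights.copy(), 0
        else:
            stall += 1
            if stall >= patience:              # early stopping on validation loss
                break
    return best_weights

# Toy usage (hypothetical): a fixed pool of sinusoidal "basis models" on 1-D data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = np.sin(3 * x) + 0.1 * rng.normal(size=300)
pool = np.stack([np.sin(k * x) for k in range(1, 8)])
w = greedy_convex_combination(pool[:, :200], y[:200], pool[:, 200:], y[200:])
print(w, w.sum())  # learned convex weights; sum is 1
```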