Reviews: Adaptive Classification for Prediction Under a Budget

Neural Information Processing Systems

The paper proposes a framework for training a composite classifier under a test-time budget on feature acquisition. The classifier consists of a high-performance model built with no constraints on cost, a low-prediction-cost model, and a gating function that selects which of the two models to use for each test sample. The setup of the problem is fairly specific: there are no constraints on complexity or feature computation cost at training time, but at test time the prediction should be as accurate as possible while using as few features as possible (with a tradeoff between the two). While several domains where this setup is relevant are mentioned in the introduction, no details are given on the intended application until the experiments section, and runtime prediction is even a consideration for only one of the datasets tested (the Yahoo one).
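
To make the routing concrete, here is a minimal sketch of the test-time logic the review describes; it is not the authors' implementation, and the three model objects and the probability-threshold gate are assumptions (any scikit-learn-style classifiers would fit this interface):

```python
import numpy as np

def gated_predict(x, gate, low_cost_model, high_cost_model, threshold=0.5):
    """Route one test sample: use the cheap model unless the gate flags it as hard.

    gate, low_cost_model, and high_cost_model are hypothetical stand-ins
    for the three trained components described in the review.
    """
    x = np.asarray(x).reshape(1, -1)
    # The gate itself must be cheap to evaluate, or routing saves nothing.
    p_hard = gate.predict_proba(x)[0, 1]
    if p_hard < threshold:
        # Easy instance: trust the low-prediction-cost model.
        return low_cost_model.predict(x)[0]
    # Hard instance: pay the full feature-acquisition cost of the big model.
    return high_cost_model.predict(x)[0]
```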


Reviews: Cost efficient gradient boosting

Neural Information Processing Systems

The paper is thus similar to the work of Xu et al., 2012. The main differences are that (i) the feature and evaluation costs are input-specific; (ii) the evaluation cost depends on the number of tree splits; (iii) the optimization approach is different, based on a second-order Taylor expansion around T_{k-1} as described in the XGBoost paper; and (iv) the trees are grown best-first up to a maximum number of splits, instead of to a maximum depth. The authors point out that their setup works both when feature cost dominates and when evaluation cost dominates, and they show experimental results for both settings.
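
A sketch of how best-first growth under a cost-penalized gain could look, assuming a find_best_split(leaf) helper that returns the best candidate split of a leaf together with its second-order (XGBoost-style) gain and its feature/evaluation cost; all names here are hypothetical, not the authors' code:

```python
import heapq

def best_first_grow(root, find_best_split, max_splits, cost_weight):
    """Repeatedly expand the leaf whose best split has the highest
    cost-penalized gain, stopping after max_splits splits (rather than
    at a maximum depth).
    """
    heap = []  # max-heap emulated with negated priorities

    def push(leaf):
        gain, cost, split = find_best_split(leaf)  # assumed helper
        if split is not None:
            priority = gain - cost_weight * cost  # gain minus cost penalty
            heapq.heappush(heap, (-priority, id(leaf), leaf, split))

    push(root)
    n_splits = 0
    while heap and n_splits < max_splits:
        neg_priority, _, leaf, split = heapq.heappop(heap)
        if -neg_priority <= 0:
            break  # no remaining split is worth its cost
        left, right = leaf.apply_split(split)  # assumed: performs the split
        n_splits += 1
        push(left)
        push(right)
    return root
```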


Adaptive Classification for Prediction Under a Budget

Nan, Feng, Saligrama, Venkatesh

Neural Information Processing Systems

We propose a novel adaptive approximation approach for test-time resource-constrained prediction, motivated by mobile, IoT, health, security, and other applications where constraints in the form of computation, communication, latency, and feature acquisition costs arise. We learn an adaptive low-cost system by training a gating model and a prediction model: the gating model limits utilization of a high-cost model to hard input instances and routes easy-to-handle instances to a low-cost model. Our method is based on adaptively approximating the high-cost model in regions where low-cost models suffice for making highly accurate predictions. We pose an empirical loss minimization problem with cost constraints to jointly train the gating and prediction models. On a number of benchmark datasets our method outperforms the state of the art, achieving higher accuracy at the same cost.
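
The abstract does not spell the objective out, but one plausible way to write such a joint empirical loss minimization with a cost constraint (the notation below is an assumption, not taken from the paper) is:

```latex
% g(x) in {0,1} is the gating model (1 = route to the high-cost model f),
% q is the low-cost prediction model, \ell a loss, c_q and c_f the two
% models' prediction costs, and B the test-time budget.
\min_{g,\,q}\; \frac{1}{N}\sum_{i=1}^{N}
  \Big[ \big(1-g(x_i)\big)\,\ell\big(q(x_i),\,y_i\big)
        + g(x_i)\,\ell\big(f(x_i),\,y_i\big) \Big]
\quad\text{s.t.}\quad
\frac{1}{N}\sum_{i=1}^{N} \Big[ c_q + g(x_i)\,c_f \Big] \;\le\; B
```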


Cost efficient gradient boosting

Peter, Sven, Diego, Ferran, Hamprecht, Fred A., Nadler, Boaz

Neural Information Processing Systems

Many applications require learning classifiers or regressors that are both accurate and cheap to evaluate. Prediction cost can be drastically reduced if the learned predictor is constructed such that, on the majority of inputs, it uses cheap features and fast evaluations. The main challenge is to do so with little loss in accuracy. In this work we propose a budget-aware strategy based on deep boosted regression trees. In contrast to previous approaches to learning with cost penalties, our method can grow very deep trees that are nonetheless cheap to compute on average. We evaluate our method on a number of datasets and find that it outperforms the current state of the art by a large margin. Our algorithm is easy to implement, and its learning time is comparable to that of the original gradient boosting.
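
The "very deep yet cheap on average" claim rests on the fact that a tree's evaluation cost is the example-weighted depth of its leaves, not its maximum depth. A small illustrative calculation (the tree shape and traffic fractions below are invented, not from the paper):

```python
def expected_eval_cost(leaf_depths, leaf_fractions):
    """Average split evaluations per input: each leaf's depth weighted by
    the fraction of inputs that reach it."""
    assert abs(sum(leaf_fractions) - 1.0) < 1e-9
    return sum(d * f for d, f in zip(leaf_depths, leaf_fractions))

# A hypothetical depth-12 tree: 90% of inputs exit at depth 2, 9% at
# depth 5, and only 1% follow the full depth-12 path.
print(expected_eval_cost([2, 5, 12], [0.90, 0.09, 0.01]))  # -> 2.37
```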