Active Bipartite Ranking
Various dedicated algorithms have recently been proposed and studied by the machine-learning community. In contrast, active bipartite ranking is poorly documented in the literature. Due to its global nature, a strategy for sequentially labeling data points that are difficult to rank w.r.t. the others is
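To make the sequential-labeling idea concrete, here is a purely illustrative, hedged sketch of a pool-based active bipartite ranking loop: it repeatedly queries the label of the point whose current score is hardest to rank (closest to the midpoint between the positive and negative score clusters) and refits a simple least-squares scorer. Every name here (active_bipartite_ranking, oracle, the ambiguity measure) is an assumption of ours, not the paper's method.

```python
import numpy as np

def active_bipartite_ranking(X, oracle, n_queries, rng=None):
    """Illustrative pool-based active-ranking loop (names and model are
    ours, not the paper's). oracle(i) returns the 0/1 label of point i."""
    rng = np.random.default_rng(rng)
    n = len(X)
    labeled, labels = [], []
    scores = np.zeros(n)                      # current scoring function
    for _ in range(n_queries):
        if len(set(labels)) < 2:
            i = int(rng.integers(n))          # bootstrap with random queries
        else:
            # crude "hard to rank" measure: distance of each score to the
            # midpoint between the mean positive and mean negative scores
            pos = scores[[j for j, y in zip(labeled, labels) if y == 1]].mean()
            neg = scores[[j for j, y in zip(labeled, labels) if y == 0]].mean()
            ambiguity = np.abs(scores - (pos + neg) / 2.0)
            ambiguity[labeled] = np.inf       # never re-query a labeled point
            i = int(np.argmin(ambiguity))
        labeled.append(i)
        labels.append(oracle(i))
        yl = np.array(labels)
        if len(set(labels)) == 2:             # refit once both classes appear
            w, *_ = np.linalg.lstsq(X[labeled], 2.0 * yl - 1.0, rcond=None)
            scores = X @ w
    return scores, labeled
```

In practice the placeholder least-squares scorer would be replaced by whatever ranking model and query criterion the paper's framework actually prescribes.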
Before defining our algorithm, at each iteration $t$ we first lighten our notation with the shorthand $b_a(X) = b(\hat{p}^{(t-1)}(X), a)$ (at different iterations $t$, $b_a$ denotes different functions), and $b(X)$ denotes the vector $(b_1(X), \ldots, b_K(X))$. For the intuition of the algorithm, consider the $t$-th iteration, where the current prediction function is $\hat{p}^{(t-1)}$. The statement of the theorem is identical; the proof is also essentially the same, except for the use of some new technical tools. Conversely, if $\hat{p}$ is LB decision calibrated, then $\|\mathbb{E}[p^\star(X) - \hat{p}(X) \mid U]\|_1 = 0$ almost surely (because if the expectation of a non-negative random variable is zero, the random variable must be zero almost surely), which implies that $\hat{p}$ is distribution calibrated. For $B_K^a$ we use the VC dimension approach.
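To spell out the parenthetical step above, here is a minimal sketch of the argument, assuming only that $Z$ abbreviates the non-negative random variable $\|\mathbb{E}[p^\star(X) - \hat{p}(X) \mid U]\|_1$ (the name $Z$ is ours) and that LB decision calibration gives $\mathbb{E}[Z] = 0$:

% Markov's inequality turns zero expectation into zero almost surely:
\[
  \Pr(Z > \varepsilon) \;\le\; \frac{\mathbb{E}[Z]}{\varepsilon} \;=\; 0
  \quad \text{for every } \varepsilon > 0,
\]

so $\Pr(Z > 0) = \lim_{\varepsilon \downarrow 0} \Pr(Z > \varepsilon) = 0$ by continuity of measure, i.e. $Z = 0$ almost surely, which is the claimed conditional agreement between $p^\star$ and $\hat{p}$.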