

Avoiding False Positive in Multi-Instance Learning

Neural Information Processing Systems

In multi-instance learning, there are two kinds of prediction failure, i.e., false negatives and false positives. Current research mainly focuses on avoiding the former. We attempt to utilize the geometric distribution of instances inside positive bags to avoid both. Based on kernel principal component analysis, we define a projection constraint for each positive bag that keeps its constituent instances far from the separating hyperplane while placing positive and negative instances on opposite sides. We apply the Constrained Concave-Convex Procedure to solve the resulting optimization problem.
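
A minimal sketch of the geometric idea, assuming scikit-learn's KernelPCA: project the instances of one positive bag into a kernel principal subspace and look at their signed distances from a candidate separating direction. The data, kernel, and hyperplane below are made up for illustration; this is not the authors' constrained formulation or the CCCP solver.

```python
# Sketch: project a positive bag's instances with kernel PCA and measure
# how far each instance falls from a candidate separating hyperplane.
# Illustrative only; data, kernel choice, and hyperplane are arbitrary here.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# One positive bag: a mix of (unlabeled) positive-like and negative-like instances.
bag = np.vstack([rng.normal(loc=2.0, size=(5, 10)),    # likely positive instances
                 rng.normal(loc=-2.0, size=(5, 10))])  # likely negative instances

# Project the bag into a 2-D kernel principal subspace.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
projected = kpca.fit_transform(bag)

# A toy separating hyperplane w^T z = 0 in the projected space.
w = np.array([1.0, 0.0])
margins = projected @ w

# Instances with large |margin| sit far from the hyperplane; the sign
# indicates on which side each instance is placed.
print("signed margins:", np.round(margins, 3))
```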


Positive Semidefinite Metric Learning with Boosting

Neural Information Processing Systems

The learning of appropriate distance metrics is a critical problem in classification. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains positive semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on a key observation that any positive semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices.
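
A minimal numerical check of that observation (not the BoostMetric algorithm itself): eigendecompose a positive semidefinite matrix and rebuild it as a nonnegative combination of trace-one rank-one terms. The matrix and its size below are arbitrary choices for illustration.

```python
# Decompose a PSD matrix A into sum_i w_i * Z_i, where each Z_i = u_i u_i^T
# is rank-one with trace one and each weight w_i >= 0 (the eigenvalues).
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B @ B.T                       # A is positive semidefinite by construction

eigvals, eigvecs = np.linalg.eigh(A)

reconstruction = np.zeros_like(A)
for w, u in zip(eigvals, eigvecs.T):
    Z = np.outer(u, u)            # rank-one, trace(Z) = u^T u = 1
    reconstruction += max(w, 0.0) * Z

assert np.allclose(A, reconstruction)   # A = sum_i w_i Z_i
assert np.all(eigvals >= -1e-10)        # the weights are nonnegative
```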


How Are Precision and Recall Calculated?

#artificialintelligence

Calculating precision and recall is actually quite easy. Imagine there are 100 positive cases among 10,000 cases. You want to predict which ones are positive, and you pick 200 to have a better chance of catching many of the 100 positive cases. You record the IDs of your predictions, and when you get the actual results you sum up how many times you were right or wrong. Precision is then the fraction of your 200 predictions that turn out to be truly positive; recall is the fraction of the 100 actual positives that appear among your predictions.
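
A short sketch of that bookkeeping in Python; the ID sets are simulated here purely to make the snippet runnable.

```python
# Simulate the scenario: 100 truly positive IDs among 10,000 cases,
# and 200 predicted-positive IDs, then compute precision and recall.
import random

random.seed(0)
all_ids = range(10_000)
actual_positives = set(random.sample(all_ids, 100))
predicted_positives = set(random.sample(all_ids, 200))

true_positives = len(actual_positives & predicted_positives)
precision = true_positives / len(predicted_positives)   # TP / (TP + FP)
recall = true_positives / len(actual_positives)         # TP / (TP + FN)

print(f"precision = {precision:.3f}, recall = {recall:.3f}")
```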


Kernel Stein Tests for Multiple Model Comparison

Neural Information Processing Systems

We address the problem of non-parametric multiple model comparison: given $l$ candidate models, decide whether each candidate is as good as the best one(s) or worse than it. We propose two statistical tests, each controlling a different notion of decision errors. The first test, building on the post-selection inference framework, provably controls the number of best models that are wrongly declared worse (false positive rate). The second test is based on multiple correction, and controls the proportion of models declared worse that are in fact as good as the best (false discovery rate). We prove that under appropriate conditions the first test can yield a higher true positive rate than the second.
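
For intuition on the second test's correction step only (not the kernel Stein discrepancy statistics themselves), here is a generic Benjamini-Hochberg procedure applied to hypothetical per-model p-values; the p-values below are made up.

```python
# Benjamini-Hochberg step-up procedure: given one p-value per candidate model
# (for the hypothesis "this model is as good as the best"), decide which
# models to declare worse while controlling the false discovery rate.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                          # indices of sorted p-values
    thresholds = alpha * np.arange(1, m + 1) / m   # i/m * alpha for rank i
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])           # largest rank meeting its threshold
        rejected[order[:k + 1]] = True             # reject it and all smaller p-values
    return rejected

# Hypothetical p-values for l = 5 candidate models.
p_vals = [0.001, 0.02, 0.04, 0.30, 0.75]
print("declared worse than the best:", benjamini_hochberg(p_vals))
```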


Precision and Recall

#artificialintelligence

Imagine a machine learning algorithm is tasked with identifying the bananas within a bowl of fruit. In total, the bowl contains 10 pieces of fruit: 4 bananas and 6 apples. The algorithm labels 5 items as bananas and 5 as apples. The bananas it labels correctly are known as true positives, while the items it incorrectly labels as bananas are called false positives. In this example, there are 4 true positives and 1 false positive, making the algorithm's precision 4/5. Since all 4 actual bananas were found, there are no false negatives, and its recall is 4/4 = 1.
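
The same arithmetic as a tiny Python check, with the counts taken directly from the example above:

```python
# Banana example: 4 true positives, 1 false positive, 0 false negatives.
true_positives = 4    # bananas correctly labeled as bananas
false_positives = 1   # apple incorrectly labeled as a banana
false_negatives = 0   # bananas the algorithm missed

precision = true_positives / (true_positives + false_positives)  # 4/5 = 0.8
recall = true_positives / (true_positives + false_negatives)     # 4/4 = 1.0

print(f"precision = {precision}, recall = {recall}")
```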