Rank Pruning


Rank Pruning for Dominance Queries in CP-Nets

Laing, Kathryn, Thwaites, Peter Adam, Gosling, John Paul

Journal of Artificial Intelligence Research

Conditional preference networks (CP-nets) are a graphical representation of a person's (conditional) preferences over a set of discrete features. In this paper, we introduce a novel method of quantifying preference for any given outcome based on a CP-net representation of a user's preferences. We demonstrate that these values are useful for reasoning about user preferences. In particular, they allow us to order (any subset of) the possible outcomes in accordance with the user's preferences. Further, these values can be used to improve the efficiency of outcome dominance testing. That is, given a pair of outcomes, we can more efficiently determine which one the user prefers. Through experimental results, we show that this method is more effective than existing techniques for improving dominance testing efficiency. We show that the above results also hold for CP-nets that express indifference between variable values.
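The dominance queries this paper accelerates can be illustrated on a toy example. The sketch below is hypothetical (it shows a plain improving-flip search, not the authors' rank-pruning method): it answers a dominance query on a two-variable binary CP-net by searching for a sequence of improving flips from the worse outcome to the better one. The variables A and B and their conditional preference tables are invented for illustration.

```python
from collections import deque

# Toy CP-net over two binary variables (hypothetical example):
#   A: a1 > a0 unconditionally.
#   B: if A = 1 then b1 > b0, else b0 > b1.

def preferred(var, outcome):
    """Preferred value of `var` given the other values in `outcome`."""
    if var == "A":
        return 1
    return 1 if outcome["A"] == 1 else 0  # B's preference depends on A

def improving_flips(outcome):
    """Yield outcomes reachable from `outcome` by one improving flip."""
    for var in ("A", "B"):
        pref = preferred(var, outcome)
        if outcome[var] != pref:
            better = dict(outcome)
            better[var] = pref
            yield better

def dominates(o_better, o_worse):
    """BFS for an improving flipping sequence from o_worse to o_better."""
    queue = deque([o_worse])
    seen = {tuple(sorted(o_worse.items()))}
    while queue:
        cur = queue.popleft()
        if cur == o_better:
            return True
        for nxt in improving_flips(cur):
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                queue.append(nxt)
    return False

# dominates({"A": 1, "B": 1}, {"A": 0, "B": 0}) -> True
# (flip A from 0 to 1, then B from 0 to 1)
```

This brute-force flip search is exponential in the worst case, which is exactly why pruning techniques such as the ranking values proposed in the paper are useful: they let the search discard outcomes that cannot lie on an improving sequence.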


Rank Pruning for Dominance Queries in CP-Nets

Laing, Kathryn, Thwaites, Peter Adam, Gosling, John Paul

arXiv.org Artificial Intelligence



Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels

Northcutt, Curtis G., Wu, Tailin, Chuang, Isaac L.

arXiv.org Machine Learning

Noisy PN learning is the problem of binary classification when training examples may be mislabeled (flipped) uniformly with noise rate ρ1 for positive examples and ρ0 for negative examples. We propose Rank Pruning (RP) to solve noisy PN learning and the open problem of estimating the noise rates, i.e. the fraction of wrong positive and negative labels. Unlike prior solutions, RP is time-efficient and general, requiring O(T) for any unrestricted choice of probabilistic classifier with T fitting time. We prove that RP achieves consistent noise estimation and an expected risk equivalent to learning with uncorrupted labels under ideal conditions, and derive closed-form solutions when conditions are non-ideal. RP achieves state-of-the-art noise estimation and F1, error, and AUC-PR on both the MNIST and CIFAR datasets, regardless of the amount of noise, and performs comparably well when a large portion of the training examples are noise drawn from a third distribution. To highlight, RP with a CNN classifier can predict whether an MNIST digit is a "one" or "not" with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples.
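The noise-rate estimation step can be sketched on synthetic data. The code below is a simplified illustration of the thresholding idea as described in the abstract, not the authors' full Rank Pruning implementation: examples whose classifier score exceeds the mean score of noisily-labeled positives are treated as confidently positive, and the fraction of those that carry a negative label estimates ρ1 (symmetrically for ρ0). The synthetic scores stand in for a real fitted probabilistic classifier, and all names and constants here are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: true labels y, noisy labels s with flip rates
# rho1 = P(s=0 | y=1) and rho0 = P(s=1 | y=0).
n = 20_000
y = rng.integers(0, 2, n)
rho1, rho0 = 0.3, 0.1
flip = rng.random(n)
s = np.where(y == 1,
             np.where(flip < rho1, 0, 1),   # positives flipped to 0 w.p. rho1
             np.where(flip < rho0, 1, 0))   # negatives flipped to 1 w.p. rho0

# Stand-in for a trained classifier's out-of-sample scores g(x) ~ P(s=1 | x).
# We derive them from y plus noise, since the point here is the estimation
# step, not the classifier itself.
g = np.clip(np.where(y == 1, 0.8, 0.2) + rng.normal(0.0, 0.1, n), 0.0, 1.0)

# Confidence thresholds: mean score within each noisily-labeled class.
lb_y1 = g[s == 1].mean()   # scores above this are "confidently positive"
ub_y0 = g[s == 0].mean()   # scores below this are "confidently negative"

conf_pos = g >= lb_y1
conf_neg = g <= ub_y0

# Among confident positives, the negatively-labeled fraction estimates rho1;
# among confident negatives, the positively-labeled fraction estimates rho0.
N11 = np.sum((s == 1) & conf_pos)
N01 = np.sum((s == 0) & conf_pos)
N10 = np.sum((s == 1) & conf_neg)
N00 = np.sum((s == 0) & conf_neg)

rho1_hat = N01 / (N01 + N11)
rho0_hat = N10 / (N10 + N00)
print(f"rho1_hat = {rho1_hat:.3f}  (true {rho1})")
print(f"rho0_hat = {rho0_hat:.3f}  (true {rho0})")
```

With the flips independent of the scores, both estimates land close to the true rates; in the full method these estimates are then used to prune the least confident examples and reweight the remainder before retraining.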