Ranking SVM


Smooth Ranking SVM via Cutting-Plane Method

Ozcan, Erhan Can, Görgülü, Berk, Baydogan, Mustafa G., Paschalidis, Ioannis Ch.

arXiv.org Artificial Intelligence

The most popular classification algorithms are designed to maximize classification accuracy during training. However, this strategy may fail in the presence of class imbalance, since it is possible to train models with high accuracy by overfitting to the majority class. The Area Under the Curve (AUC), on the other hand, is a widely used metric for comparing the classification performance of different algorithms when there is class imbalance, and various approaches that directly optimize this metric during training have been proposed. Among them, SVM-based formulations are especially popular, as they allow different regularization strategies to be incorporated easily. In this work, we develop a prototype learning approach that relies on the cutting-plane method, similar to Ranking SVM, to maximize AUC. Our algorithm learns simpler models by iteratively introducing cutting planes, so overfitting is prevented in an unconventional way. Furthermore, it penalizes changes in the weights at each iteration to avoid the large jumps that might otherwise be observed in test performance, thus facilitating a smooth learning process. In experiments on 73 binary classification datasets, our method yields the best test AUC among its relevant competitors on 25 datasets.
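The connection between AUC and pairwise ranking that such formulations exploit can be sketched as follows: AUC equals the fraction of (positive, negative) example pairs that the scoring function orders correctly, which is exactly the quantity Ranking-SVM-style objectives optimize a surrogate of. A minimal illustration (not the paper's algorithm):

```python
import numpy as np

def pairwise_auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly.

    This is the pairwise quantity that Ranking-SVM-style formulations
    optimize surrogates of; ties count as half a correct pair.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]                 # all positive-negative pairs
    correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return correct / (len(pos) * len(neg))

# A scorer that overfits to the majority (negative) class can reach high
# accuracy while still ordering minority-class examples poorly, which is
# why AUC is the preferred target under class imbalance.
print(pairwise_auc([0.9, 0.4, 0.8, 0.3], [1, 0, 1, 0]))  # perfectly ordered: 1.0
```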


Aligning Subjective Ratings in Clinical Decision Making

Pick, Annika, Ginzel, Sebastian, Rüping, Stefan, Sander, Jil, Foldenauer, Ann Christina, Köhm, Michaela

arXiv.org Machine Learning

While objective indicators are more transparent and robust, subjective evaluation contains a wealth of expert knowledge and intuition. In this work, we demonstrate the potential of pairwise ranking methods to align the subjective evaluation with objective indicators, creating a new score that combines their advantages and facilitates diagnosis. In a case study on patients at risk for developing Psoriatic Arthritis, we illustrate that the resulting score (1) increases classification accuracy when detecting disease presence/absence, (2) is sparse, and (3) provides a nuanced assessment of severity for subsequent analysis.
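A minimal sketch of the pairwise-ranking idea behind such an alignment (the data, features, and fitting procedure below are illustrative assumptions, not the study's actual method): learn linear weights over objective indicators so that the induced score orders patients consistently with the expert's subjective ratings, via logistic regression on feature differences of ordered pairs.

```python
import numpy as np
from itertools import combinations

def pairwise_rank_fit(X, y, lr=0.1, epochs=200):
    """Learn weights w so that X[i] @ w > X[j] @ w whenever y[i] > y[j].

    Standard pairwise-ranking-as-classification trick: each ordered pair
    becomes one training example whose feature vector is the signed
    difference of the two rows; fit by gradient ascent on the logistic
    log-likelihood of ordering every pair correctly.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    diffs = []
    for i, j in combinations(range(len(y)), 2):
        if y[i] != y[j]:
            sign = 1.0 if y[i] > y[j] else -1.0
            diffs.append(sign * (X[i] - X[j]))
    D = np.array(diffs)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-D @ w))        # P(pair ordered correctly)
        w += lr * D.T @ (1.0 - p) / len(D)      # ascend the log-likelihood
    return w

# Hypothetical example: rows are objective indicators per patient,
# y is a subjective severity rating given by the expert.
X = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
w = pairwise_rank_fit(X, [1, 2, 3])
scores = X @ w                                  # increases with rated severity
```

The learned score is linear in the indicators, so sparsity can be encouraged with an L1 penalty on `w`, in the spirit of the sparse score the abstract describes.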


Ball Ranking Machines for Content-Based Multimedia Retrieval

Luo, Dijun (The University of Texas at Arlington) | Huang, Heng (The University of Texas at Arlington)

AAAI Conferences

In this paper, we propose the new Ball Ranking Machines (BRMs) to address supervised ranking problems. In previous work, supervised ranking methods have been successfully applied to various information retrieval tasks. Among these methodologies, the Ranking Support Vector Machines (Rank SVMs) are well investigated. However, one major fact limiting their application is that Ranking SVMs need to optimize a margin-based objective function over all possible document pairs within all queries on the training set. In consequence, Ranking SVMs need to select a large number of support vectors from a huge number of support vector candidates. This paper introduces a new model of Ranking SVMs and develops an efficient approximation algorithm, which decreases the training time and generates far fewer support vectors. Empirical studies on synthetic data and content-based image/video retrieval data show that our method is comparable to Ranking SVMs in accuracy, but uses far fewer ranking support vectors and significantly less training time.
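The scaling bottleneck described above, namely one margin constraint per (relevant, non-relevant) document pair within each query, grows quadratically and is easy to quantify. A small sketch (the query sizes are hypothetical):

```python
def num_pair_constraints(queries):
    """Number of margin constraints a pairwise Ranking SVM faces: one per
    (relevant, non-relevant) document pair within each query.

    `queries` is a list of per-query relevance label lists (1 = relevant,
    0 = non-relevant).
    """
    total = 0
    for labels in queries:
        pos = sum(1 for label in labels if label == 1)
        neg = len(labels) - pos
        total += pos * neg
    return total

# Hypothetical scale: 100 queries, each with 50 relevant and 950
# non-relevant documents -- 100,000 documents but millions of pairs.
print(num_pair_constraints([[1] * 50 + [0] * 950] * 100))  # 4750000
```

This quadratic blow-up in pairs, and the correspondingly large pool of support-vector candidates, is the motivation the abstract gives for an approximation that trains faster with far fewer support vectors.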


Ranking Measures and Loss Functions in Learning to Rank

Chen, Wei, Liu, Tie-yan, Lan, Yanyan, Ma, Zhi-ming, Li, Hang

Neural Information Processing Systems

Learning to rank has become an important research topic in machine learning. While most learning-to-rank methods learn the ranking function by minimizing the loss functions, it is the ranking measures (such as NDCG and MAP) that are used to evaluate the performance of the learned ranking function. In this work, we reveal the relationship between ranking measures and loss functions in learning-to-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE. We show that these loss functions are upper bounds of the measure-based ranking errors. As a result, the minimization of these loss functions will lead to the maximization of the ranking measures. The key to obtaining this result is to model ranking as a sequence of classification tasks, and define a so-called essential loss as the weighted sum of the classification errors of individual tasks in the sequence. We have proved that the essential loss is both an upper bound of the measure-based ranking errors, and a lower bound of the loss functions in the aforementioned methods. Our proof technique also suggests a way to modify existing loss functions to make them tighter bounds of the measure-based ranking errors. Experimental results on benchmark datasets show that the modifications can lead to better ranking performance, demonstrating the correctness of our analysis.
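The upper-bound relationship at the heart of this analysis can be illustrated at the level of a single pair: the pairwise hinge loss used by Ranking SVM dominates the 0-1 misranking indicator, so driving the surrogate down drives the ranking error down. A minimal numeric sketch (not the paper's essential-loss construction, which covers listwise measures such as NDCG and MAP as well):

```python
def pairwise_error(s_i, s_j):
    """0-1 ranking error for a pair where item i should rank above item j."""
    return 1.0 if s_i <= s_j else 0.0

def hinge_surrogate(s_i, s_j):
    """Ranking SVM's pairwise hinge loss max(0, 1 - (s_i - s_j)).

    Whenever the pair is misranked (s_i <= s_j) the hinge is at least 1,
    so it upper-bounds the 0-1 error everywhere.
    """
    return max(0.0, 1.0 - (s_i - s_j))

# Correctly ranked with margin, correctly ranked without margin, misranked:
for s_i, s_j in [(2.0, 0.0), (0.5, 0.0), (0.0, 0.5)]:
    assert hinge_surrogate(s_i, s_j) >= pairwise_error(s_i, s_j)
```

Minimizing an upper bound guarantees the 0-1 error cannot exceed the achieved surrogate value, which is the sense in which minimizing these losses maximizes the ranking measures.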