Improving Label Ranking Ensembles using Boosting Techniques

arXiv.org Machine Learning

Label ranking is a prediction task that deals with learning a mapping between an instance and a ranking (i.e., order) of labels from a finite set, representing their relevance to the instance. Boosting is a well-known and reliable ensemble technique that has been shown to often outperform other learning algorithms. While boosting algorithms have been developed for a multitude of machine learning tasks, label ranking tasks have been overlooked. In this paper, we propose a boosting algorithm specifically designed for label ranking tasks. Extensive evaluation of the proposed algorithm on 24 semi-synthetic and real-world label ranking datasets shows that it significantly outperforms existing state-of-the-art label ranking algorithms.
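As an illustration of how boosting can be adapted to rankings (a generic sketch, not the algorithm proposed in the paper): instance weights can be updated from a rank-based loss such as the Kendall tau distance, so that instances whose rankings the current ensemble gets wrong receive more attention in later rounds. The weighted-Borda weak ranker below is purely an assumed placeholder.

```python
# Illustrative sketch of AdaBoost-style instance reweighting for label ranking
# (a generic scheme, not the algorithm proposed in the paper). The "weak
# ranker" is an assumed placeholder: a weighted-Borda constant predictor.
import numpy as np

def kendall_tau_distance(r1, r2):
    """Fraction of label pairs ordered differently by two rankings (0 = best rank)."""
    n = len(r1)
    disagreements = sum(
        (r1[i] < r1[j]) != (r2[i] < r2[j])
        for i in range(n) for j in range(i + 1, n)
    )
    return disagreements / (n * (n - 1) / 2)

def weighted_borda_ranker(Y, w):
    """Weak ranker: order labels by their weighted-average rank position."""
    mean_pos = np.average(Y, axis=0, weights=w)
    return np.argsort(np.argsort(mean_pos))

def boost_label_ranking(Y, n_rounds=10):
    """Y: (n_instances, n_labels) array of rank positions (0 = most relevant)."""
    Y = np.asarray(Y)
    n = len(Y)
    w = np.full(n, 1.0 / n)               # uniform instance weights to start
    ensemble = []
    for _ in range(n_rounds):
        h = weighted_borda_ranker(Y, w)
        losses = np.array([kendall_tau_distance(h, y) for y in Y])
        eps = float(np.dot(w, losses))    # weighted rank loss of this round
        if eps >= 0.5:                    # no better than chance: stop early
            break
        alpha = 0.5 * np.log((1.0 - eps) / max(eps, 1e-12))
        ensemble.append((alpha, h))
        w = w * np.exp(alpha * losses)    # up-weight badly ranked instances
        w /= w.sum()
    return ensemble
```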


Xu

AAAI Conferences

Multi-label learning methods assign multiple labels to one object. In practice, in addition to differentiating relevant labels from irrelevant ones, it is often desired to rank the relevant labels for an object, whereas the rankings of irrelevant labels are not important. Such a requirement, however, cannot be met by most existing methods, which were designed to optimize other criteria; no existing criterion encodes this requirement. In this paper, we present a new criterion, Pro Loss, concerning the prediction on all labels as well as the rankings of only relevant labels. We then propose ProSVM, which optimizes Pro Loss efficiently using the alternating direction method of multipliers. We further improve its efficiency with an upper approximation that reduces the number of constraints from O(T^2) to O(T), where T is the number of labels. Experiments show that our proposals are not only superior on Pro Loss, but also highly competitive on existing evaluation criteria.
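To make the stated requirement concrete, the sketch below shows one way a loss of this flavor can be assembled: it penalizes relevant labels that are not scored above irrelevant ones, plus mis-ordered pairs of relevant labels, while ignoring how irrelevant labels are ordered among themselves. This is an illustrative stand-in, not the paper's exact definition of Pro Loss or its ADMM-based optimization.

```python
# Hedged sketch of the *kind* of criterion described: penalize relevant labels
# not scored above irrelevant ones, plus mis-ordered pairs of relevant labels,
# while ignoring the ordering among irrelevant labels.
# This is an illustrative stand-in, not the paper's exact Pro Loss.
import numpy as np

def pro_style_loss(scores, relevant, true_rank):
    """
    scores:    predicted score per label (higher = more relevant)
    relevant:  boolean mask of truly relevant labels
    true_rank: ground-truth rank positions per label (only relevant entries used)
    """
    relevant = np.asarray(relevant)
    rel = np.where(relevant)[0]
    irr = np.where(~relevant)[0]
    loss = 0.0
    # classification part: each relevant label should outscore each irrelevant one
    for i in rel:
        for j in irr:
            loss += float(scores[i] <= scores[j])
    # ranking part: pairs of relevant labels must keep their ground-truth order
    for a in range(len(rel)):
        for b in range(a + 1, len(rel)):
            i, j = rel[a], rel[b]
            if (true_rank[i] < true_rank[j]) != (scores[i] > scores[j]):
                loss += 1.0
    return loss

# Example: label 2 is relevant but scored below the irrelevant label 3.
print(pro_style_loss(scores=[0.9, 0.6, 0.2, 0.4],
                     relevant=[True, True, True, False],
                     true_rank=[0, 1, 2, 3]))
```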


Wang

AAAI Conferences

Label ranking aims to map instances to an order over a predefined set of labels. Ideally, a label ranking model would be trained by directly maximizing performance measures on the training data. However, existing label ranking models are mainly based on minimizing classification errors or rank losses. To fill this gap, in this paper a novel label ranking model is learned by minimizing a loss function defined directly on the performance measures. The proposed algorithm, referred to as BoostLR, employs a boosting framework and utilizes a rank aggregation technique to construct weak label rankers. Experimental results demonstrate the initial success of BoostLR.
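Rank aggregation itself is a standard building block; the minimal sketch below uses a weighted Borda count, averaging the rank positions assigned by several rankers and ordering labels by that average. Weighting the rankers by boosting coefficients is an assumption made for illustration, not a detail taken from the paper.

```python
# Minimal sketch of weighted Borda-count rank aggregation, the kind of
# rank-aggregation step a boosted label ranker could use to combine the
# rankings of its weak rankers. The weighting is an illustrative assumption.
import numpy as np

def borda_aggregate(rankings, weights=None):
    """
    rankings: (k, n_labels) array of rank positions (0 = most preferred)
    weights:  optional per-ranking weights, e.g. boosting coefficients
    returns:  one aggregated ranking over the labels
    """
    rankings = np.asarray(rankings, dtype=float)
    if weights is None:
        weights = np.ones(len(rankings))
    avg_pos = np.average(rankings, axis=0, weights=weights)
    return np.argsort(np.argsort(avg_pos))  # lower average position ranks first

# Example: three rankers over four labels, the third ranker counting double.
print(borda_aggregate([[0, 1, 2, 3], [1, 0, 2, 3], [0, 2, 1, 3]], [1, 1, 2]))
```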


Label Ranking with Partial Abstention based on Thresholded Probabilistic Models

Neural Information Processing Systems

Several machine learning methods allow for abstaining from uncertain predictions. While common in settings like conventional classification, abstention has been studied much less in learning to rank. We address abstention for the label ranking setting, allowing the learner to declare certain pairs of labels as incomparable and, thus, to predict partial instead of total orders. In our method, such predictions are produced by thresholding the probabilities of pairwise preferences between labels, as induced by a predicted probability distribution on the set of all rankings. We formally analyze this approach for the Mallows and the Plackett-Luce model, showing that it produces proper partial orders as predictions and characterizing the expressiveness of the induced class of partial orders.
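For the Plackett-Luce case, the pairwise preference probabilities have a simple closed form: with positive label weights v, P(label i precedes label j) = v_i / (v_i + v_j). The sketch below thresholds these probabilities to decide which pairs to commit to and which to leave incomparable; it is a minimal reading of the abstract, not the authors' implementation.

```python
# Sketch of the thresholding idea in the Plackett-Luce (PL) case. Under PL with
# positive label weights v, P(label i precedes label j) = v[i] / (v[i] + v[j]).
# Pairs whose probability exceeds the threshold are kept as preferences; all
# other pairs are left incomparable (abstained on). Illustrative reading only.
import numpy as np

def thresholded_partial_order(v, threshold=0.7):
    v = np.asarray(v, dtype=float)
    n = len(v)
    prefer = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and v[i] / (v[i] + v[j]) > threshold:
                prefer[i, j] = True     # confident: label i precedes label j
    return prefer

# Example: with threshold 0.7, the two close labels (2.0 vs 1.8) stay incomparable.
print(thresholded_partial_order([5.0, 2.0, 1.8, 0.5], threshold=0.7))
```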


ROAR: Robust Label Ranking for Social Emotion Mining

AAAI Conferences

Understanding and predicting latent emotions of users toward online contents, known as social emotion mining, has become increasingly important to both social platforms and businesses alike. Despite recent developments, however, very little attention has been paid to the issues of nuance, subjectivity, and bias in social emotions. In this paper, we fill this gap by formulating social emotion mining as a robust label ranking problem, and propose: (1) a robust measure, named G-mean-rank (GMR), which sets a formal criterion consistent with practical intuition; and (2) a simple yet effective label ranking model, named ROAR, that is more robust toward unbalanced datasets (which are common). Through comprehensive empirical validation using 4 real datasets and 16 benchmark semi-synthetic label ranking datasets, and a case study, we demonstrate the superiority of our proposals over 2 popular label ranking measures and 6 competing label ranking algorithms. The datasets and implementations used in the empirical validation are available for access.
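The abstract does not spell out how GMR is computed. Purely as an assumption, and by analogy with the classical G-mean used for imbalanced classification, one robustness-oriented construction would take the geometric mean of per-label ranking scores, so that poor performance on a rare emotion label cannot be masked by good performance on frequent ones:

```python
# Hypothetical illustration only: the abstract does not define GMR. By analogy
# with the classical G-mean for imbalanced classification, a robustness-oriented
# measure could take the geometric mean of per-label ranking scores, so a rare
# label that is ranked poorly cannot be masked by frequent, well-ranked labels.
import numpy as np

def geometric_mean_of_label_scores(per_label_scores):
    """per_label_scores: one score in (0, 1] per label, e.g. how well the
    label's predicted rank positions match its true ones."""
    s = np.clip(np.asarray(per_label_scores, dtype=float), 1e-12, None)
    return float(np.exp(np.mean(np.log(s))))

# One badly handled label drags the geometric mean down far more sharply
# than it would an arithmetic mean.
print(geometric_mean_of_label_scores([0.9, 0.85, 0.9, 0.05]))  # ~0.43
print(float(np.mean([0.9, 0.85, 0.9, 0.05])))                  # 0.675
```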