Semi-Supervised Multi-Label Learning with Incomplete Labels

AAAI Conferences

The problem of incomplete labels is frequently encountered in many application domains where training labels are obtained via crowd-sourcing. Label incompleteness significantly increases the difficulty of acquiring accurate multi-label prediction models. In this paper, we propose a novel semi-supervised multi-label method that integrates low-rank label matrix recovery into the manifold regularized vector-valued prediction framework to address multi-label learning with incomplete labels. The proposed method is formulated as a convex but non-smooth joint optimization problem over the latent label matrix and the prediction model parameters. We then develop a fast proximal gradient descent algorithm with continuation to solve it and obtain a globally optimal solution. The efficacy of the proposed approach is demonstrated on multiple multi-label datasets in comparison with related methods that handle incomplete labels.
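
To make the optimization step concrete, the Python fragment below is a minimal sketch of proximal gradient descent with continuation applied to a low-rank label-recovery surrogate (squared loss on the observed label entries plus a nuclear-norm penalty). The surrogate objective, the names svt and recover_labels, and all parameter values are illustrative assumptions, not the paper's exact formulation, which also couples the label matrix with the manifold-regularized prediction model.

import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def recover_labels(Y, observed, mu_target=0.1, eta=0.25, lr=1.0, iters=200):
    # Proximal gradient with continuation for the assumed surrogate
    #   min_Z 0.5 * || observed * (Z - Y) ||_F^2 + mu * ||Z||_*
    Z = np.zeros_like(Y)
    mu = np.abs(Y).max()                      # start with a large mu
    for _ in range(iters):
        grad = observed * (Z - Y)             # gradient of the smooth loss
        Z = svt(Z - lr * grad, lr * mu)       # proximal (thresholding) step
        mu = max(mu * eta, mu_target)         # continuation: shrink mu toward target
    return Z

# Toy usage: a 6x3 binary label matrix with roughly half the entries observed.
rng = np.random.default_rng(0)
Y_true = (rng.random((6, 3)) > 0.5).astype(float)
mask = (rng.random((6, 3)) > 0.5).astype(float)
Z_hat = recover_labels(Y_true * mask, mask)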


Discriminant Laplacian Embedding

AAAI Conferences

Many real-life applications enabled by modern technologies involve multiple data sources that are characterized by both attributes and pairwise similarities at the same time. For example, in webpage ranking, a webpage is usually represented by a vector of term values, while internet linkages induce pairwise similarities among the webpages. Although both attributes and pairwise similarities are useful for class membership inference, many traditional embedding algorithms deal with only one type of input data. In order to make use of both types of data simultaneously, in this work we propose a novel Discriminant Laplacian Embedding (DLE) approach. Supervision information from training data is integrated into DLE to improve the discriminative power of the resulting embedding space. By resolving the ambiguity in computing the scatter matrices caused by data points with multiple labels, we extend the proposed DLE to multi-label classification. In addition, by incorporating label correlations, the classification performance of multi-label DLE is further enhanced. Promising results from extensive empirical evaluations demonstrate the effectiveness of our approaches.
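
As a rough illustration of combining attribute-based scatter matrices with similarity-based graph structure, the sketch below solves a single generalized eigenproblem over both terms. It assumes single-label data, a symmetric similarity matrix W, and a coupling weight alpha; the exact DLE objective, its multi-label extension, and the label-correlation term are not reproduced here.

import numpy as np
from scipy.linalg import eigh

def dle_sketch(X, y, W, alpha=1.0, dim=2):
    # Hedged sketch of a DLE-style embedding: LDA scatter matrices from
    # attributes X and labels y, plus a graph term from similarities W,
    # combined in one generalized eigenproblem.
    n, d = X.shape
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                      # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)     # between-class scatter
    D = np.diag(W.sum(axis=1))
    A = Sb + alpha * (X.T @ W @ X)                         # "pull together" terms
    B = Sw + alpha * (X.T @ D @ X) + 1e-6 * np.eye(d)      # normalization terms
    vals, vecs = eigh(A, B)                                # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:dim]]           # top eigenvectors = projection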


Vector-Valued Multi-View Semi-Supervised Learning for Multi-Label Image Classification

AAAI Conferences

Images are usually associated with multiple labels and comprised of multiple views, because each image contains several objects (e.g., a pedestrian, bicycle and tree) and is described by multiple visual features (e.g., color, texture and shape). Currently available tools tend to use either the labels or the features for classification, but both are necessary to describe an image properly. There have been recent successes in using vector-valued functions, which construct matrix-valued kernels, to explore the multi-label structure in the output space. This has motivated us to develop multi-view vector-valued manifold regularization (MV$^3$MR) in order to integrate multiple features. MV$^3$MR exploits the complementary properties of different features and discovers the intrinsic local geometry of the compact support shared by the different features, under the theme of manifold regularization. We validate the effectiveness of the proposed MV$^3$MR method for image classification through extensive experiments on two challenging datasets, PASCAL VOC 2007 and MIR Flickr.
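
The sketch below illustrates the manifold-regularization ingredient in a multi-view setting: per-view graph Laplacians are averaged into a shared Laplacian and used in a Laplacian-regularized least-squares solve on the concatenated features. The equal view weights and the scalar kernel (rather than a matrix-valued kernel) are simplifying assumptions; this is not the MV$^3$MR algorithm itself.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mv_laplacian_rls(views, Y_labeled, n_labeled, lam=1e-2, mu=1e-2):
    # Hedged sketch: average per-view Laplacians, then a LapRLS-style solve.
    n = views[0].shape[0]
    L = np.zeros((n, n))
    for V in views:
        W = rbf_kernel(V)
        L += (np.diag(W.sum(1)) - W) / len(views)   # average per-view Laplacians
    X = np.hstack(views)
    K = rbf_kernel(X)
    J = np.zeros((n, n)); J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    Y = np.zeros((n, Y_labeled.shape[1])); Y[:n_labeled] = Y_labeled
    # (J K + lam n I + mu L K) alpha = Y  -- standard LapRLS-style linear system
    alpha = np.linalg.solve(J @ K + lam * n * np.eye(n) + mu * L @ K, Y)
    return K @ alpha                                # scores for all points, all labels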


Transductive Learning on Adaptive Graphs

AAAI Conferences

Graph-based semi-supervised learning methods rely on a smoothness assumption about the data. As a discrete approximation of the data manifold, the graph plays a crucial role in the success of such methods. In most existing methods, graph construction uses a predefined weighting function and ignores label information even when it is available. In this work, by incorporating label information, we seek to enhance the performance of graph-based semi-supervised learning by learning the graph and the label inference simultaneously. Specifically, we consider the transductive setting of semi-supervised learning. Using the LogDet divergence to define the objective function, we propose an iterative algorithm to solve the optimization problem, which has a closed-form solution in each step. We perform experiments on both synthetic and real data to demonstrate improvements in both the learned graph and the resulting classification accuracy.
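
The alternating structure of such an approach can be sketched as follows: infer labels on the current graph with the standard harmonic-function solution, then update the edge weights using the inferred labels. The closed-form LogDet-divergence graph update from the paper is not reproduced here; the multiplicative reweighting step below is a simple stand-in used only to show the alternation.

import numpy as np

def harmonic_labels(W, y_labeled, labeled_idx):
    # Standard harmonic-function label inference on a fixed graph W.
    n = W.shape[0]
    L = np.diag(W.sum(1)) - W
    u = np.setdiff1d(np.arange(n), labeled_idx)
    f = np.zeros((n, y_labeled.shape[1]))
    f[labeled_idx] = y_labeled
    f[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ y_labeled)
    return f

def adaptive_graph_transduction(W, y_labeled, labeled_idx, rounds=5, step=0.5):
    # Alternate label inference and a heuristic graph update (stand-in for
    # the paper's LogDet-based closed-form update).
    for _ in range(rounds):
        f = harmonic_labels(W, y_labeled, labeled_idx)
        agree = f @ f.T                               # similarity of inferred soft labels
        W = W * (1 - step) + step * W * np.clip(agree, 0, None)
        np.fill_diagonal(W, 0.0)
    return f, W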


Robust Semi-Supervised Learning through Label Aggregation

AAAI Conferences

Semi-supervised learning is proposed to exploit both labeled and unlabeled data. However, as the scale of data in real-world applications increases significantly, conventional semi-supervised algorithms usually incur massive computational cost and cannot be applied to large-scale datasets. In addition, label noise is usually present in practical applications due to human annotation, which often results in significant performance degradation for semi-supervised methods. To address these two challenges, in this paper we propose an efficient RObust Semi-Supervised Ensemble Learning (ROSSEL) method, which generates pseudo-labels for unlabeled data using a set of weak annotators and combines them to approximate the ground-truth labels to assist semi-supervised learning. We formulate the weighted combination process as a multiple label kernel learning (MLKL) problem, which can be solved efficiently. Compared with other semi-supervised learning algorithms, the proposed method has linear time complexity. Extensive experiments on five benchmark datasets demonstrate the superior effectiveness, efficiency and robustness of the proposed algorithm.
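
The label-aggregation idea can be sketched as follows: several weak annotators are fit on the labeled data, their pseudo-labels on the unlabeled data are combined with reliability weights, and a final model is trained on the augmented set. The accuracy-based weights and the choice of final classifier are illustrative assumptions; the paper's MLKL formulation and its linear-time solver are not shown.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

def rossel_sketch(X_l, y_l, X_u):
    # Hedged sketch of label aggregation: weak annotators vote on pseudo-labels,
    # votes are combined with crude accuracy-based weights (an assumption).
    annotators = [LogisticRegression(max_iter=200),
                  DecisionTreeClassifier(max_depth=3),
                  KNeighborsClassifier(n_neighbors=3)]
    votes, weights = [], []
    for clf in annotators:
        clf.fit(X_l, y_l)
        weights.append(clf.score(X_l, y_l))          # reliability weight (assumption)
        votes.append(clf.predict(X_u))
    votes = np.array(votes)
    weights = np.array(weights) / np.sum(weights)
    classes = np.unique(y_l)
    # Weighted vote -> aggregated pseudo-labels for the unlabeled points.
    scores = np.array([[weights[votes[:, i] == c].sum() for c in classes]
                       for i in range(X_u.shape[0])])
    pseudo = classes[scores.argmax(1)]
    final = LogisticRegression(max_iter=200).fit(np.vstack([X_l, X_u]),
                                                 np.concatenate([y_l, pseudo]))
    return final, pseudo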