Goto

Collaborating Authors

Smart Multi-Task Bregman Clustering and Multi-Task Kernel Clustering

AAAI Conferences

Multitask Bregman Clustering (MBC) alternately updates clusters and learns the relationships between the clusters of different tasks, and the two phases boost each other. However, the boosting does not always have a positive effect; it may also cause a negative one. Another issue with MBC is that it cannot handle nonlinearly separable data. In this paper, we show that MBC's use of cluster relationships to boost the cluster-updating phase may have a negative effect, i.e., cluster centroids may be skewed under some conditions. We propose a smart multi-task Bregman clustering (S-MBC) algorithm that identifies this negative effect of the boosting and avoids it when it occurs. We then extend the framework of S-MBC to a smart multi-task kernel clustering (S-MKC) framework to handle nonlinearly separable data. We also propose a specific implementation of the framework that can be applied with any Mercer kernel. Experimental results confirm our analysis and demonstrate the superiority of our proposed methods.
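As a rough schematic (our notation, not the paper's exact formulation), the MBC-style alternation can be viewed as minimizing a within-task Bregman clustering loss plus a cross-task coupling term over the cluster centroids:

$$\min_{\{C_t\},\{\pi_t\}} \; \sum_{t=1}^{T} \sum_{x \in X_t} d_\phi\bigl(x, c_{t,\pi_t(x)}\bigr) \;+\; \lambda \sum_{t \neq t'} R\bigl(C_t, C_{t'}\bigr),$$

where $d_\phi$ is a Bregman divergence, $C_t=\{c_{t,1},\dots,c_{t,k}\}$ are the centroids of task $t$, $\pi_t$ assigns the points of task $t$ to clusters, and $R$ pulls related centroids of different tasks toward each other. Under this reading, the negative effect analyzed in the paper corresponds to the coupling term dragging a centroid away from its own task's data, and S-MBC's safeguard is, roughly, to detect such cases and drop the coupling for the affected centroids.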


Multi-Stage Multi-Task Feature Learning

Neural Information Processing Systems

Multi-task sparse feature learning aims to improve generalization performance by exploiting features shared among tasks. It has been successfully applied in many areas, including computer vision and biomedical informatics. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm. Moreover, we present a detailed theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation.
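As an illustration only (our notation; the paper's exact regularizer may differ), a typical non-convex choice of this kind is a capped-$\ell_1$ penalty applied row-wise to the task weight matrix $W \in \mathbb{R}^{d \times m}$ for $m$ tasks:

$$\min_{W} \; \sum_{i=1}^{m} \mathcal{L}\bigl(w_i; X_i, y_i\bigr) \;+\; \lambda \sum_{j=1}^{d} \min\bigl(\|w^j\|_1, \theta\bigr),$$

where $w_i$ is the weight vector of task $i$, $w^j$ is the $j$-th row of $W$ (one feature across all tasks), and $\theta > 0$ caps the per-feature penalty. A multi-stage scheme then alternates between solving a reweighted convex problem and updating the per-feature weights from the current solution, which is the sense in which the analysis can improve over a one-shot convex relaxation.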


Large Margin Multi-Task Metric Learning

Neural Information Processing Systems

Multi-task learning (MTL) improves prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension of SVMs by Evgeniou et al. Although very elegant, the multi-task SVM is inherently restricted by the fact that support vector machines address each class explicitly with its own weight vector, which, in a multi-task setting, forces the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (LMNN) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule, which inherently extends to many classes and is a natural fit for multi-task learning.
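To make the construction concrete (schematic notation and assumptions ours), a natural multi-task extension of LMNN equips each task $t$ with a Mahalanobis metric built from a shared component and a task-specific offset,

$$d_t(x_i, x_j) = (x_i - x_j)^\top \bigl(M_0 + M_t\bigr)(x_i - x_j), \qquad M_0 \succeq 0, \; M_t \succeq 0,$$

and then minimizes the usual LMNN objective within each task: pull each point's target neighbors close while pushing differently labeled impostors beyond a unit margin, with the $M_t$ regularized toward zero so that the tasks share as much of the metric as the data supports.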


Probabilistic Multi-Task Feature Selection

Neural Information Processing Systems

Recently, some variants of the $l_1$ norm, particularly matrix norms such as the $l_{1,2}$ and $l_{1,\infty}$ norms, have been widely used in multi-task learning, compressed sensing and other related areas to enforce sparsity via joint regularization. In this paper, we unify the $l_{1,2}$ and $l_{1,\infty}$ norms by considering a family of $l_{1,q}$ norms for $1 \le q \le \infty$ and study the problem of determining the most appropriate sparsity-enforcing norm to use in the context of multi-task feature selection. We give this family of norms a probabilistic interpretation and, based on it, develop a probabilistic model using the noninformative Jeffreys prior. We also extend the model to learn and exploit more general types of pairwise relationships between tasks. For both versions of the model, we devise expectation-maximization (EM) algorithms to learn all model parameters, including $q$, automatically.
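For reference, the family being unified is, in standard notation, the row-wise $l_{1,q}$ norm of the weight matrix $W \in \mathbb{R}^{d \times m}$ over $m$ tasks:

$$\|W\|_{1,q} \;=\; \sum_{j=1}^{d} \|w^j\|_q \;=\; \sum_{j=1}^{d} \Bigl(\sum_{i=1}^{m} |w_{ji}|^q\Bigr)^{1/q}, \qquad 1 \le q \le \infty,$$

which recovers the $l_{1,2}$ norm at $q = 2$ and the $l_{1,\infty}$ norm in the limit $q \to \infty$. Roughly speaking, the probabilistic reading treats each row penalty as a prior over a feature's weights across tasks, which is what makes it possible to treat $q$ as just another parameter to be learned by EM.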


Li

AAAI Conferences

Multi-task learning is a paradigm in which multiple tasks are learned jointly. Previous multi-task learning models usually treat all tasks, and all instances within a task, equally during learning. Inspired by the fact that humans often learn from easy concepts to hard ones in the cognitive process, in this paper we propose a novel multi-task learning framework that learns the tasks while simultaneously taking into account the complexities of both tasks and instances within each task. We propose a novel formulation built on a new task-oriented regularizer that can jointly prioritize tasks and instances; thus it can be interpreted as a self-paced learner for multi-task learning. An efficient block coordinate descent algorithm is developed to solve the proposed objective function, and the convergence of the algorithm is guaranteed. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
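Although the exact regularizer is not reproduced here, the self-paced idea can be sketched (our notation, illustrative only) by attaching learnable weights to both tasks and instances:

$$\min_{W,\; v \in [0,1]^{T},\; u \in [0,1]^{\sum_t n_t}} \; \sum_{t=1}^{T} \sum_{i=1}^{n_t} v_t\, u_{t,i}\, \ell\bigl(w_t; x_{t,i}, y_{t,i}\bigr) \;+\; \Omega(W) \;-\; \gamma_1 \sum_{t} v_t \;-\; \gamma_2 \sum_{t,i} u_{t,i},$$

where $v_t$ and $u_{t,i}$ are the pace weights of task $t$ and of instance $i$ in task $t$, $\Omega$ is a standard multi-task regularizer, and the linear negative terms are the classic self-paced penalty: a task or instance is admitted only when its current loss falls below a threshold set by $\gamma_1, \gamma_2$. Gradually increasing the $\gamma$'s moves the learner from easy to hard, and block coordinate descent alternates between the model weights $W$ and the pace weights $(v, u)$.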