Heterogeneous Transfer Learning for Image Classification
Zhu, Yin (Hong Kong University of Science and Technology) | Chen, Yuqiang (Shanghai Jiao Tong University) | Lu, Zhongqi (Hong Kong University of Science and Technology) | Pan, Sinno Jialin (Institute for Infocomm Research) | Xue, Gui-Rong (Shanghai Jiao Tong University) | Yu, Yong (Shanghai Jiao Tong University) | Yang, Qiang (Hong Kong University of Science and Technology)
Transfer learning, as a new machine learning paradigm, has gained increasing attention lately. In situations where the training data in a target domain are not sufficient to learn predictive models effectively, transfer learning leverages auxiliary data from other related source domains for learning. While most existing work in this area has focused only on source data with the same structure as the target data, in this paper we push this boundary further by proposing a heterogeneous transfer learning framework for knowledge transfer between text and images. We observe that, for a target-domain image classification problem, annotated images can be found on many social Web sites, and these can serve as a bridge to transfer knowledge from the abundant text documents available on the Web. A key question is how to transfer knowledge effectively from the source data even though the auxiliary text may be collected arbitrarily. Our solution is to enrich the representation of the target images with semantic concepts extracted from the auxiliary source data through a novel matrix factorization method. By using the latent semantic features generated from the auxiliary data, we are able to build a better integrated image classifier. We empirically demonstrate the effectiveness of our algorithm on the Caltech-256 image dataset.
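The abstract above couples annotated social-Web images and auxiliary text through shared latent semantic factors. The following is a minimal sketch of one way such a coupling could be set up as a collective matrix factorization; the matrix definitions (a visual-word-by-tag co-occurrence matrix G from annotated images and a tag-by-word matrix F from auxiliary text), the plain gradient-descent solver, and the way latent features are attached to target images are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def collective_mf(G, F, k=20, lam=0.1, lr=0.01, iters=300, seed=0):
    """Jointly factorize G ~ U V^T (visual words x tags, from annotated
    social images) and F ~ V W^T (tags x words, from auxiliary text),
    sharing the tag factor V so that text knowledge shapes U.
    Shapes, solver, and hyperparameters are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    n_vw, n_tag = G.shape
    n_word = F.shape[1]
    U = 0.1 * rng.standard_normal((n_vw, k))
    V = 0.1 * rng.standard_normal((n_tag, k))
    W = 0.1 * rng.standard_normal((n_word, k))
    for _ in range(iters):
        EG = U @ V.T - G                  # residual of image-side reconstruction
        EF = V @ W.T - F                  # residual of text-side reconstruction
        gU = EG @ V + lam * U             # gradients of squared error + L2
        gV = EG.T @ U + EF @ W + lam * V
        gW = EF.T @ V + lam * W
        U -= lr * gU
        V -= lr * gV
        W -= lr * gW
    return U, V, W

# Illustrative use: if H holds bag-of-visual-word histograms of the target
# images (n_images x n_vw), then H @ U gives latent semantic features that
# can be concatenated with H before training an ordinary classifier.
```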
Visual Contextual Advertising: Bringing Textual Advertisements to Images
Chen, Yuqiang (Shanghai Jiao Tong University) | Jin, Ou (Shanghai Jiao Tong University) | Xue, Gui-Rong (Shanghai Jiao Tong University) | Chen, Jia (Shanghai Jiao Tong University) | Yang, Qiang (Hong Kong University of Science and Technology)
Advertising on textual Web pages has been studied extensively by many researchers. However, with the increasing amount of multimedia data such as images, audio and video on the Web, the need to recommend advertisements for multimedia content is becoming pressing. In this paper, we address the novel problem of visual contextual advertising: directly recommending advertisements while users are viewing images that have no surrounding text. A key challenge of visual contextual advertising is that images and advertisements are usually represented in an image space and a word space respectively, which are inherently quite different from each other. As a result, existing methods for Web page advertising are inapplicable, since they represent both Web pages and advertisements in the same word space. To solve this problem, we propose to exploit the social Web to link these two feature spaces together. In particular, we present a unified generative model that integrates advertisements, words and images. Specifically, our solution combines two parts in a principled approach: first, we transform images from the image feature space to the word space using knowledge from annotated images on the social Web; then, a language-model-based approach is applied to estimate the relevance between the transformed images and advertisements. Moreover, in this model, the probability of recommending an advertisement given an image can be inferred efficiently, which enables potential applications to online advertising.
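As an illustration of the two-step pipeline described above, the sketch below first maps an image into the word space by pooling annotation words from its visually most similar social-Web images, and then ranks advertisements with a smoothed language-model score. The nearest-neighbour pooling step, the Dirichlet smoothing, and all function and variable names are assumptions chosen for illustration rather than the paper's actual generative model.

```python
import numpy as np

def image_to_word_distribution(img_feat, social_feats, social_word_counts, k=10):
    """Approximate P(word | image) by pooling the annotation words of the k
    visually most similar social-Web images (cosine similarity).
    social_word_counts: (n_social_images x vocab) annotation word counts."""
    sims = social_feats @ img_feat
    sims /= (np.linalg.norm(social_feats, axis=1) * np.linalg.norm(img_feat) + 1e-12)
    top = np.argsort(-sims)[:k]
    weights = np.clip(sims[top], 0.0, None)
    pooled = weights @ social_word_counts[top]        # weighted word counts
    return pooled / (pooled.sum() + 1e-12)

def ad_score(image_word_dist, ad_counts, collection_dist, mu=1000.0):
    """Score an ad by the likelihood of its Dirichlet-smoothed word
    distribution generating the image's (fractional) word counts."""
    p_w_given_ad = (ad_counts + mu * collection_dist) / (ad_counts.sum() + mu)
    return float(image_word_dist @ np.log(p_w_given_ad + 1e-12))

# Illustrative ranking: score every candidate ad against one image and sort.
# p_img = image_to_word_distribution(x, S, C); scores = [ad_score(p_img, a, bg) for a in ads]
```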