Latecki, Longin Jan
Regularized Diffusion Process for Visual Retrieval
Bai, Song (Huazhong University of Science and Technology) | Bai, Xiang (Huazhong University of Science and Technology) | Tian, Qi (University of Texas at San Antonio) | Latecki, Longin Jan (Temple University)
The diffusion process has greatly advanced visual retrieval owing to its capacity to capture the geometric structure of the underlying manifold. Recent studies (Donoser and Bischof 2013) have experimentally demonstrated that the diffusion process on the tensor product graph yields better retrieval performance than that on the original affinity graph. However, the principle behind this kind of diffusion process remains unclear, i.e., what kind of manifold structure is captured and how it is reflected. In this paper, we propose a new variant of the diffusion process, which also operates on a tensor product graph. It is defined in three equivalent formulations (a regularization framework, an iterative framework and a limit framework, respectively). Based on our study, three insightful conclusions are drawn which theoretically explain how this kind of diffusion process can better reveal the intrinsic relationship between objects. Moreover, extensive experimental results on various retrieval tasks testify to the validity of the proposed method.
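To make the idea of diffusion on a tensor product graph concrete, the following is a minimal sketch of one common iterative formulation (updating the affinity matrix A with A_{t+1} = alpha * S A_t S^T + (1 - alpha) * I, which corresponds to diffusion with the Kronecker-product kernel S (x) S applied to vec(I)). It is not the exact regularized formulation proposed in the paper; the normalization choice, the parameters alpha and n_iter, and the function name are illustrative assumptions.

import numpy as np

def tensor_diffusion(W, alpha=0.9, n_iter=50):
    """Iterative affinity diffusion on the tensor product graph (generic sketch).

    W : (n, n) symmetric affinity matrix.
    Iterates A_{t+1} = alpha * S A_t S^T + (1 - alpha) * I, where S is the
    symmetrically normalized affinity; equivalent to diffusion with the
    kernel S (x) S (Kronecker product) applied to vec(I).
    """
    # Symmetric normalization: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    A = np.eye(W.shape[0])            # start from self-similarity only
    for _ in range(n_iter):
        A = alpha * (S @ A @ S.T) + (1.0 - alpha) * np.eye(W.shape[0])
    return A                          # learned affinity, used to re-rank retrieval results

# Toy usage: affinity built from pairwise distances of random points.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / D2.mean())
np.fill_diagonal(W, 0.0)
A_star = tensor_diffusion(W)
print(np.round(A_star, 3))

Because alpha < 1 and the normalized S has spectral radius at most one, the iteration converges, and the fixed point satisfies vec(A*) = (1 - alpha)(I - alpha S (x) S)^{-1} vec(I).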
Locality Preserving Projection for Domain Adaptation with Multi-Objective Learning
Shu, Le (Temple University) | Ma, Tianyang (Temple University) | Latecki, Longin Jan (Temple University)
In many practical cases, we need to generalize a model trained in a source domain to a new target domain. However, the distributions of these two domains may differ significantly; in particular, some crucial target features may not have support in the source domain. This paper proposes a novel locality preserving projection method for the domain adaptation task, which can find a linear mapping preserving the 'intrinsic structure' of both the source and target domains. We first construct two graphs encoding the neighborhood information of the source and target domains separately. We then find linear projection coefficients which have the locality preserving property for each graph. Instead of combining the two objective terms under a compatibility assumption and requiring the user to decide the importance of each objective function, we propose a multi-objective formulation for this problem and solve it simultaneously using Pareto optimization. The Pareto frontier captures all possible good linear projection coefficients that are preferred by one or more objectives. The effectiveness of our approach is justified by both theoretical analysis and empirical results on real-world data sets. The new feature representation shows better prediction accuracy, as our experiments demonstrate.
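The following is a minimal sketch of the single-graph locality preserving projection that forms the building block of the approach described above: one such objective is built per domain, and the paper then searches the Pareto frontier of the two objectives instead of fixing a weighted combination. The function name lpp, the regularization term, and the toy affinity construction are illustrative assumptions, not the paper's exact procedure.

import numpy as np
from scipy.linalg import eigh

def lpp(X, W, n_components=2):
    """Standard locality preserving projection for a single graph (sketch).

    X : (n_samples, n_features) data matrix.
    W : (n_samples, n_samples) symmetric neighborhood affinity matrix.
    Solves the generalized eigenproblem  X^T L X a = lam * X^T D X a
    and keeps the eigenvectors with the smallest eigenvalues.
    """
    D = np.diag(W.sum(axis=1))                    # degree matrix
    L = D - W                                     # graph Laplacian
    A = X.T @ L @ X                               # locality-preserving objective
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # scale constraint, lightly regularized
    vals, vecs = eigh(A, B)                       # generalized symmetric eigensolver
    return vecs[:, :n_components]                 # columns are projection directions

# Toy usage with a Gaussian affinity on random data.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / D2.mean())
np.fill_diagonal(W, 0.0)
P = lpp(X, W, n_components=2)
Z = X @ P                                         # low-dimensional embedding
print(Z.shape)                                    # (20, 2)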
Size Adaptive Selection of Most Informative Features
Liu, Si (Chinese Academy of Science) | Liu, Hairong (National University of Singapore) | Latecki, Longin Jan (Temple University) | Yan, Shuicheng (National University of Singapore) | Xu, Changsheng (China-Singapore Institute of Digital Media) | Lu, Hanqing (Chinese Academy of Science)
In this paper, we propose a novel method to select the most informative subset of features, which has little redundancy and very strong discriminating power. Our proposed approach automatically determines the optimal number of features and selects the best subset accordingly by maximizing the average pairwise informativeness, and thus has an obvious advantage over traditional filter methods. By relaxing the essential combinatorial optimization problem into a standard quadratic programming problem, the most informative feature subset can be obtained efficiently, and a strategy to dynamically compute the redundancy between feature pairs further greatly accelerates our method by avoiding unnecessary computations of mutual information. As shown by extensive experiments, the proposed method can successfully select the most informative subset of features, and the obtained classification results significantly outperform the state-of-the-art results on most test datasets.
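As a rough illustration of the kind of relaxation described above, the sketch below solves a simplex-constrained quadratic program with replicator dynamics, a standard technique for such problems: the features with non-negligible weight at convergence form the selected subset, so the subset size falls out of the optimization. The matrix H, the support threshold, and the function name are illustrative assumptions; the paper's specific informativeness measure and its dynamic redundancy computation are not reproduced here.

import numpy as np

def select_informative_features(H, n_iter=200, tol=1e-9):
    """Maximize x^T H x over the probability simplex via replicator dynamics.

    H : (n_features, n_features) nonnegative symmetric matrix scoring the
        joint informativeness of feature pairs (high relevance, low redundancy).
    Returns the indices of features with non-negligible weight at convergence.
    """
    n = H.shape[0]
    x = np.full(n, 1.0 / n)                       # start at the simplex barycenter
    for _ in range(n_iter):
        Hx = H @ x
        x_new = x * Hx / max(x @ Hx, 1e-12)       # replicator update stays on the simplex
        if np.abs(x_new - x).sum() < tol:
            x = x_new
            break
        x = x_new
    return np.flatnonzero(x > 1.0 / (10 * n))     # features with meaningful support

# Toy usage with a random symmetric "informativeness" matrix.
rng = np.random.default_rng(2)
M = rng.random((8, 8))
H = (M + M.T) / 2.0
np.fill_diagonal(H, 0.0)
print(select_informative_features(H))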