Large Scale Local Online Similarity/Distance Learning Framework based on Passive/Aggressive

arXiv.org Machine Learning

Similarity/distance measures play a key role in many machine learning, pattern recognition, and data mining algorithms, which has led to the emergence of the metric learning field. Many metric learning algorithms learn a global distance function from data that satisfies the constraints of the problem. However, in many real-world datasets, where the discriminative power of features varies across different regions of the input space, a global metric is often unable to capture the complexity of the task. To address this challenge, local metric learning methods have been proposed that learn multiple metrics across different regions of the input space. These methods offer high flexibility and the ability to learn a nonlinear mapping, but typically at the expense of higher time requirements and a risk of overfitting. To overcome these challenges, this research presents an online multiple metric learning framework. Each metric in the proposed framework is composed of a global and a local component learned simultaneously. Adding a global component to a local metric efficiently reduces the problem of overfitting. The proposed framework is also scalable in both the sample size and the dimension of the input data. To the best of our knowledge, this is the first local online similarity/distance learning framework based on PA (Passive/Aggressive). In addition, for scalability with the dimension of the input data, DRP (Dual Random Projection) is extended to local online learning in the present work. This enables our methods to run efficiently on high-dimensional datasets while maintaining their predictive performance. The proposed framework provides a straightforward local extension to any global online similarity/distance learning algorithm based on PA.
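
The abstract leaves the update rule implicit, but the canonical PA-style building block for online bilinear similarity learning is the OASIS triplet update: the metric stays untouched when the margin constraint holds (passive) and otherwise moves just far enough to satisfy it (aggressive). Below is a minimal single-metric sketch of that step; the function name, the aggressiveness bound C, and the single-matrix setup are illustrative assumptions, not the paper's full local/global framework.

import numpy as np

def pa_similarity_update(W, x, x_pos, x_neg, C=0.1):
    # Triplet hinge loss for the bilinear similarity s(a, b) = a^T W b:
    # s(x, x_pos) should beat s(x, x_neg) by a margin of 1 (OASIS-style).
    loss = max(0.0, 1.0 - x @ W @ x_pos + x @ W @ x_neg)
    if loss == 0.0:
        return W                                  # passive: constraint already met
    V = np.outer(x, x_pos - x_neg)                # direction that reduces the loss
    tau = min(C, loss / np.linalg.norm(V) ** 2)   # aggressive but bounded step size
    return W + tau * V

# Toy usage: start from the identity, i.e., a plain dot-product similarity.
rng = np.random.default_rng(0)
d = 5
W = np.eye(d)
x, x_pos, x_neg = rng.standard_normal((3, d))
W = pa_similarity_update(W, x, x_pos, x_neg)

A local framework such as the one described here would maintain several such matrices (one local component per region plus a shared global component); the sketch shows only the PA mechanics they all share.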


Supervised Online Hashing via Similarity Distribution Learning

arXiv.org Artificial Intelligence

Hashing-based visual search has attracted extensive research attention in recent years due to the rapid growth of visual data on the Internet [7, 33, 8, 26, 12, 13, 30, 32, 25, 35, 27]. In various scenarios, online hashing has become a hot topic owing to the need to handle streaming data; it aims to resolve an online retrieval task by updating the hash functions from sequentially arriving data instances. On one hand, online hashing retains the advantages of traditional offline hashing methods, i.e., low storage cost and efficient pairwise distance computation in the Hamming space. On the other hand, it also offers training efficiency and scalability for large-scale applications, since the hash functions are updated instantly and solely based on the current streaming data, which is superior to traditional hashing methods that train a hashing model entirely from scratch.

Most online hashing methods, which learn binary codes based on pairwise similarities of training instances, fail to capture the semantic relationship and suffer from poor generalization in large-scale applications due to large variations. In this paper, we propose to model the similarity distributions between the input data and the hashing codes, upon which a novel supervised online hashing method, dubbed Similarity Distribution based Online Hashing (SDOH), is proposed to keep the intrinsic semantic relationship in the produced Hamming space. Specifically, we first transform the discrete similarity matrix into a probability matrix via a Gaussian-based normalization to address the extremely imbalanced distribution issue. Then, we introduce a scaling Student
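
The "Gaussian-based normalization" is described only at a high level here; one plausible reading is to pass each discrete similarity through a Gaussian kernel and then row-normalize, so every instance induces a probability distribution over the others. The kernel shape and the sigma parameter below are assumptions for illustration, not details confirmed by the paper.

import numpy as np

def gaussian_probability_matrix(S, sigma=1.0):
    # Map each discrete similarity (e.g., +1/-1) through a Gaussian kernel
    # centered at perfect similarity, then normalize rows to sum to 1.
    K = np.exp(-(1.0 - S) ** 2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

# Toy usage: a 3x3 discrete pairwise similarity matrix.
S = np.array([[ 1.0,  1.0, -1.0],
              [ 1.0,  1.0, -1.0],
              [-1.0, -1.0,  1.0]])
P = gaussian_probability_matrix(S)   # each row is now a probability distribution

Softening the hard +1/-1 entries this way is one way to address the extreme imbalance the abstract mentions, since the result can then be compared against a distribution over Hamming-space similarities.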


Learning Relative Similarity by Stochastic Dual Coordinate Ascent

AAAI Conferences

Learning relative similarity from pairwise instances is an important problem in machine learning with a wide range of applications. Despite being studied for years, existing methods solved by Stochastic Gradient Descent (SGD) techniques generally suffer from slow convergence. In this paper, we investigate the application of the Stochastic Dual Coordinate Ascent (SDCA) technique to the optimization task of relative similarity learning, extending it from vector to matrix parameters. Theoretically, we prove an optimal linear convergence rate for the proposed SDCA algorithm, beating the well-known sublinear convergence rate of the previous best metric learning algorithms. Empirically, we conduct extensive experiments on both standard and large-scale datasets to validate the effectiveness of the proposed algorithm for retrieval tasks.
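
For concreteness, here is a minimal sketch of what hinge-loss SDCA looks like once the parameter is a matrix: each triplet (x, x_pos, x_neg) owns one dual variable, the primal matrix is a weighted sum of rank-one triplet matrices, and each coordinate step has the usual closed form for the hinge loss. Variable names, the regularizer lam, and the epoch loop are illustrative; the paper's algorithm and analysis are richer than this.

import numpy as np

def sdca_similarity(triplets, d, lam=0.01, epochs=5, seed=0):
    # Learn W minimizing (lam/2)*||W||_F^2 + (1/n) * sum_t max(0, 1 - <W, A_t>),
    # where A_t = x (x_pos - x_neg)^T, by dual coordinate ascent.
    n = len(triplets)
    alpha = np.zeros(n)          # one dual variable per triplet, kept in [0, 1]
    W = np.zeros((d, d))         # primal iterate: W = (1 / (lam * n)) * sum_t alpha_t A_t
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for t in rng.permutation(n):
            x, xp, xn = triplets[t]
            A = np.outer(x, xp - xn)
            margin = 1.0 - np.sum(W * A)                  # 1 - <W, A_t>
            denom = np.sum(A * A) + 1e-12
            new_alpha = np.clip(alpha[t] + margin * lam * n / denom, 0.0, 1.0)
            W += (new_alpha - alpha[t]) / (lam * n) * A   # keep primal in sync
            alpha[t] = new_alpha
    return W

# Toy usage: random triplets in 4 dimensions.
rng = np.random.default_rng(1)
triplets = [tuple(rng.standard_normal((3, 4))) for _ in range(50)]
W = sdca_similarity(triplets, d=4)

The convergence advantage claimed in the abstract comes from this dual view: each closed-form coordinate step monotonically increases the dual objective, in contrast to SGD's noisy primal steps with decaying step sizes.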


Structural Sentence Similarity Estimation for Short Texts

AAAI Conferences

Sentence similarity is the basis of most text-related tasks. In this paper, we define a new task of sentence similarity estimation specifically for short, informal, social-network-styled sentences. The new type of sentence similarity, which we call Structural Similarity, eliminates syntactic or grammatical features such as dependency paths and Part-of-Speech (POS) tags, which are not representative enough on short sentences. Structural Similarity does not consider the actual meanings of the sentences either, but puts more emphasis on the similarities of sentence structures, so as to discover purpose- or emotion-level similarities. The idea is based on the observation that people tend to use sentences with similar structures to express similar feelings. Beyond the definition, we present a new feature set and a mechanism to calculate the scores, and, to disambiguate word senses, we propose a variant of the Word2Vec model to represent words. We demonstrate the correctness and advantages of our sentence similarity measure through experiments.
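
The abstract keeps the feature set abstract, so the following is only a crude stand-in for the idea of scoring structure rather than meaning: mask the content words, keep the function-word skeleton, and compare skeleton bigrams. None of this (the word list, the placeholder, the Jaccard scoring) is the authors' mechanism; it merely shows why two sentences with the same skeleton come out structurally close.

def structural_template(sentence, function_words):
    # Keep function words, replace content words with a placeholder,
    # leaving only the sentence skeleton.
    return ["*" if w not in function_words else w
            for w in sentence.lower().split()]

def structural_similarity(s1, s2, function_words):
    # Jaccard overlap of skeleton bigrams: 1.0 for identical structures.
    def bigrams(seq):
        return set(zip(seq, seq[1:]))
    b1 = bigrams(structural_template(s1, function_words))
    b2 = bigrams(structural_template(s2, function_words))
    return len(b1 & b2) / len(b1 | b2) if (b1 or b2) else 1.0

FUNCTION_WORDS = {"i", "you", "he", "she", "it", "we", "they", "a", "an",
                  "the", "is", "are", "was", "not", "can", "can't", "that", "so"}

print(structural_similarity("I can't believe he did that",
                            "I can't believe she said that", FUNCTION_WORDS))
# ~0.43: the shared skeleton dominates despite the different content words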


Corpus-based and Knowledge-based Measures of Text Semantic Similarity

AAAI Conferences

This paper presents a method for measuring the semantic similarity of texts, using corpus-based and knowledge-based measures of similarity. Previous work on this problem has focused mainly on either large documents (e.g.
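
Although the abstract is cut off here, measures in this family share a common aggregation template: lift a word-to-word similarity to the text level by pairing each word with its best counterpart in the other text, weighting by word specificity (idf), and averaging both directions. The sketch below shows only that generic template; word_sim and idf are placeholders for whichever corpus-based or knowledge-based components get plugged in, not the paper's exact definitions.

def text_similarity(t1, t2, word_sim, idf):
    # Directional score: each word in `a` is matched with its most similar
    # word in `b`, weighted by idf; the final score averages both directions.
    def directed(a, b):
        num = sum(max(word_sim(w, v) for v in b) * idf(w) for w in a)
        return num / sum(idf(w) for w in a)
    return 0.5 * (directed(t1, t2) + directed(t2, t1))

# Toy usage with stand-in components (exact-match word similarity, uniform idf).
score = text_similarity(["a", "cat", "sits"], ["a", "dog", "sat"],
                        word_sim=lambda w, v: 1.0 if w == v else 0.0,
                        idf=lambda w: 1.0)
print(score)   # 1/3: only "a" finds an exact-match counterpart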