Goto

Collaborating Authors

 The Hong Kong Polytechnic University


Unsupervised Measure of Word Similarity: How to Outperform Co-Occurrence and Vector Cosine in VSMs

AAAI Conferences

In this paper, we claim that vector cosine – which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models – can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words. To prove it, we describe and evaluate APSyn, a variant of the Average Precision that, without any optimization, outperforms the vector cosine and the co-occurrence on the standard ESL test set, with an improvement ranging between +9.00% and +17.98%, depending on the number of chosen top contexts.
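The intersection-based idea can be sketched as follows: rank each word's contexts by an association score, then sum the inverse mean rank over the shared top-N contexts. This is a minimal sketch; the PMI-style ranking and the positive-score filter are assumptions for illustration, not the paper's exact implementation.

```python
from math import log

def ranked_contexts(cooc, word, total):
    """Rank a word's contexts by a PPMI-style association score (descending).
    cooc maps word -> {context: co-occurrence count}; total is the sum of
    all counts in the matrix."""
    word_total = sum(cooc[word].values())
    ctx_totals = {}
    for ctxs in cooc.values():
        for c, n in ctxs.items():
            ctx_totals[c] = ctx_totals.get(c, 0) + n
    # Keep only contexts with positive PMI, an assumption made here for brevity.
    scores = {c: log(n * total / (word_total * ctx_totals[c]))
              for c, n in cooc[word].items()
              if n * total > word_total * ctx_totals[c]}
    return sorted(scores, key=scores.get, reverse=True)

def apsyn(cooc, w1, w2, total, top_n=100):
    """APSyn-style score: sum of 1 / mean rank over the shared top-N contexts."""
    r1 = {c: i + 1 for i, c in enumerate(ranked_contexts(cooc, w1, total)[:top_n])}
    r2 = {c: i + 1 for i, c in enumerate(ranked_contexts(cooc, w2, total)[:top_n])}
    # 1 / ((r1 + r2) / 2) == 2 / (r1 + r2) for each shared context.
    return sum(2.0 / (r1[c] + r2[c]) for c in r1.keys() & r2.keys())
```

On a toy co-occurrence matrix, word pairs that share highly ranked contexts score higher than pairs that share none, which is the behavior the abstract contrasts with vector cosine.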


Modeling Quantum Entanglements in Quantum Language Models

AAAI Conferences

Recently, a Quantum Language Model (QLM) was proposed to model term dependencies within the Quantum Theory (QT) framework and was successfully applied in Information Retrieval (IR). Nevertheless, QLM's dependency is based on co-occurrences of terms and has not yet taken into account Quantum Entanglement (QE), which is a key quantum concept with significant cognitive implications. In QT, an entangled state can provide a more complete description of the nature of realities and determine intrinsic correlations of the considered objects globally, rather than the co-occurrences on the surface. It is, however, a real challenge to decide and measure QE using the classical statistics of texts in a post-measurement configuration. In order to circumvent this problem, we theoretically prove the connection between QE and statistically Unconditional Pure Dependence (UPD). Since UPD has an implementable deciding algorithm, we can in turn characterize QE by extracting the UPD patterns from texts. This leads to a measurable QE, based on which we further advance the existing QLM framework. We empirically compare our model with related models, and the results demonstrate the effectiveness of our model.


Video Saliency Detection via Dynamic Consistent Spatio-Temporal Attention Modelling

AAAI Conferences

The human vision system actively seeks salient regions and movements in video sequences to reduce the search effort. Modeling a computational visual saliency map provides important information for semantic understanding in many real-world applications. In this paper, we propose a novel video saliency detection model for detecting the attended regions that correspond to both interesting objects and dominant motions in video sequences. For the spatial saliency map, we inherit the classical bottom-up spatial saliency map. For the temporal saliency map, a novel optical flow model is proposed based on the dynamic consistency of motion. The spatial and temporal saliency maps are constructed and then fused together to create a novel attention model. The proposed attention model is evaluated on three video datasets. Empirical validations demonstrate that the salient regions detected by our dynamically consistent saliency map highlight the interesting objects effectively and efficiently. More importantly, the video attended regions automatically detected by the proposed attention model are consistent with the ground-truth saliency maps of eye movement data.


A Cyclic Weighted Median Method for L1 Low-Rank Matrix Factorization with Missing Entries

AAAI Conferences

A challenging problem in machine learning, information retrieval and computer vision research is how to recover a low-rank representation of the given data in the presence of outliers and missing entries. The L1-norm low-rank matrix factorization (LRMF) has been a popular approach to solving this problem. However, L1-norm LRMF is difficult to achieve due to its non-convexity and non-smoothness, and existing methods are often inefficient and fail to converge to a desired solution. In this paper we propose a novel cyclic weighted median (CWM) method, which is intrinsically a coordinate descent algorithm, for L1-norm LRMF. The CWM method minimizes the objective by solving a sequence of scalar minimization sub-problems, each of which is convex and can be easily solved by the weighted median filter. The extensive experimental results validate that the CWM method outperforms state-of-the-art methods in terms of both accuracy and computational efficiency.
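The key building block can be sketched concretely: each scalar subproblem min over u of sum_i |x_i - u * v_i| equals sum_i |v_i| * |x_i / v_i - u|, whose minimizer is a weighted median of the ratios x_i / v_i with weights |v_i|. The sketch below shows only this single coordinate update, not the full cyclic factorization loop, and the function names are illustrative.

```python
def weighted_median(values, weights):
    """Return a minimizer of sum_i weights[i] * |x - values[i]|:
    the smallest value at which cumulative weight reaches half the total."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

def update_coordinate(x_row, v_col):
    """One CWM-style scalar update: argmin_u sum_i |x_row[i] - u * v_col[i]|,
    reduced to a weighted median of ratios (zero factors contribute nothing)."""
    vals, wts = [], []
    for x, v in zip(x_row, v_col):
        if v != 0:
            vals.append(x / v)
            wts.append(abs(v))
    return weighted_median(vals, wts)
```

Because the median rather than a mean is taken, a single grossly corrupted entry in `x_row` does not pull the update away from the consensus of the clean entries, which is the robustness property the L1 objective is chosen for.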


Query-Oriented Multi-Document Summarization via Unsupervised Deep Learning

AAAI Conferences

Extractive-style query-oriented multi-document summarization generates the summary by extracting a proper set of sentences from multiple documents based on the pre-specified query. This paper proposes a novel multi-document summarization framework via a deep learning model. This uniform framework consists of three parts: concept extraction, summary generation, and reconstruction validation, which work together to achieve the largest coverage of the documents' content. A new query-oriented extraction technique is proposed to concentrate distributed information into hidden units layer by layer. Then, the whole deep architecture is fine-tuned by minimizing the information loss of the reconstruction validation. According to the concentrated information, dynamic programming is used to seek the most informative set of sentences as the summary. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed framework and algorithms.


Generating Coherent Summaries with Textual Aspects

AAAI Conferences

Initiated by TAC 2010, aspect-guided summaries not only address specific user need, but also ameliorate content-level coherence by using aspect information. This paper presents a full-fledged system composed of three modules: finding sentence-level textual aspects, modeling aspect-based coherence with an HMM model, and selecting and ordering sentences with aspect information to generate coherent summaries. The evaluation results on the TAC 2011 datasets show the superiority of aspect-guided summaries in terms of both information coverage and textual coherence.
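The aspect-based coherence component can be illustrated with a minimal transition-scoring sketch: given sentence-level aspect labels, a candidate ordering is scored by first-order transition probabilities, and the highest-scoring ordering is kept. The aspect labels and probabilities below are hypothetical, and the authors' HMM is richer than this reduced, observed-state version.

```python
from math import log

def coherence_score(aspect_seq, init_p, trans_p, floor=1e-6):
    """Log-likelihood of a sentence-level aspect sequence under a
    first-order transition model (states treated as observed for brevity).
    Unseen events are smoothed with a small floor probability."""
    score = log(init_p.get(aspect_seq[0], floor))
    for prev, cur in zip(aspect_seq, aspect_seq[1:]):
        score += log(trans_p.get((prev, cur), floor))
    return score
```

In use, one would enumerate or search candidate sentence orderings and select the one maximizing this score, so that orderings following typical aspect progressions (e.g. background before impact) are preferred.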


What Are Tweeters Doing: Recognizing Speech Acts in Twitter

AAAI Conferences

Speech acts provide good insights into the communicative behavior of tweeters on Twitter. This paper is mainly concerned with speech act recognition in Twitter as a multi-class classification problem, for which we propose a set of word-based and character-based features. Inexpensive, robust and efficient, our method achieves an average F1 score of nearly 0.7 despite considerable noise in our annotated Twitter data. In view of the deficiency of training data for the task, we experimented extensively with different configurations of training and test data, leading to empirical findings that may provide a valuable reference for building benchmark datasets for sustained research on speech act recognition in Twitter.
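A minimal sketch of word-based and character-based features of the kind the abstract mentions follows; the exact feature templates (bag of words, character trigrams, punctuation cues) are hypothetical choices for illustration, not the paper's actual feature set.

```python
def speech_act_features(tweet, n=3):
    """Extract a set of word features, character n-gram features, and
    simple surface cues from a tweet (illustrative feature templates)."""
    text = tweet.lower()
    feats = {"word=" + w for w in text.split()}
    padded = " " + text + " "  # pad so n-grams capture word boundaries
    feats |= {"char%d=%s" % (n, padded[i:i + n])
              for i in range(len(padded) - n + 1)}
    feats.add("has_question=%s" % ("?" in tweet))     # question cue
    feats.add("has_exclamation=%s" % ("!" in tweet))  # exclamation cue
    return feats
```

Such a feature set would typically be fed to any off-the-shelf multi-class classifier; character n-grams help with the misspellings and nonstandard tokens common in tweets.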


Ordinal Regression via Manifold Learning

AAAI Conferences

Ordinal regression is an important research topic in machine learning. It aims to automatically determine the implied rating of a data item on a fixed, discrete rating scale. In this paper, we present a novel ordinal regression approach via manifold learning, which is capable of uncovering the embedded nonlinear structure of the data set according to the observations in the high-dimensional feature space. By optimizing the order information of the observations and preserving the intrinsic geometry of the data set simultaneously, the proposed algorithm provides faithful ordinal regression for newly arriving data points. To offer a more general solution for data with a natural tensor structure, we further introduce a multilinear extension of the proposed algorithm, which can support ordinal regression of high-order data such as images. Experiments on various data sets validate the effectiveness of the proposed algorithm as well as its extension.


Multilinear Maximum Distance Embedding Via L1-Norm Optimization

AAAI Conferences

Dimensionality reduction plays an important role in many machine learning and pattern recognition tasks. In this paper, we present a novel dimensionality reduction algorithm called multilinear maximum distance embedding (M2DE), which includes three key components. To preserve the local geometry and discriminant information in the embedded space, M2DE utilizes a new objective function, which aims to maximize the distances between some particular pairs of data points, such as the distances between nearby points and the distances between data points from different classes. To make the mapping of new data points straightforward, and more importantly, to keep the natural tensor structure of high-order data, M2DE integrates multilinear techniques to learn the transformation matrices sequentially. To provide reasonable and stable embedding results, M2DE employs the L1-norm, which is more robust to outliers, to measure the dissimilarity between data points. Experiments on various datasets demonstrate that M2DE achieves good embedding results on high-order data for classification tasks.