McGreggor, Keith (Georgia Institute of Technology) | Goel, Ashok (Georgia Institute of Technology)

We report a novel approach to addressing the Raven’s Progressive Matrices (RPM) tests, one based upon purely visual representations. Our technique introduces the calculation of confidence in an answer and the automatic adjustment of level of resolution if that confidence is insufficient. We first describe the nature of the visual analogies found on the RPM. We then exhibit our algorithm and work through a detailed example. Finally, we present the performance of our algorithm on the four major variants of the RPM tests, illustrating the impact of confidence. This is the first such account of any computational model against the entirety of the Raven’s.
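The confidence mechanism described above can be illustrated with a minimal sketch. The similarity measure and the margin-based confidence below are assumptions for illustration, not the paper's exact formulation; the idea is that a caller would re-run the comparison at a finer level of resolution whenever the returned confidence falls below a threshold.

```python
import numpy as np

def similarity(a, b):
    """Ratio of shared to total features between two feature sets
    (an illustrative measure; the paper's actual metric may differ)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def answer_with_confidence(target_features, candidate_features, threshold=0.5):
    """Pick the candidate most similar to the target representation.
    Confidence is taken as the margin between the best and second-best
    scores; if it is below `threshold`, the caller would refine the
    resolution of the visual representation and try again."""
    scores = [similarity(target_features, c) for c in candidate_features]
    order = np.argsort(scores)[::-1]
    confidence = scores[order[0]] - scores[order[1]]
    return int(order[0]), confidence, confidence >= threshold
```

With one candidate matching the target exactly and another sharing nothing, the margin is large and the answer is accepted without refinement.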

Zhu, Xiaofeng (Guangxi Normal University) | He, Wei (Guangxi Normal University) | Li, Yonggang (Guangxi Normal University) | Yang, Yang ( University of Electronic Science and Technology of China ) | Zhang, Shichao (Guangxi Normal University) | Hu, Rongyao (Guangxi Normal University) | Zhu, Yonghua (Guangxi University)

This paper proposes a one-step spectral clustering method that learns an intrinsic affinity matrix (i.e., the clustering result) from the low-dimensional space (i.e., the intrinsic subspace) of the original data. Specifically, the intrinsic affinity matrix is learnt by: 1) aligning it with the initial affinity matrix learnt from the original data; 2) adjusting the transformation matrix, which maps the original feature space into its intrinsic subspace by simultaneously conducting feature selection and subspace learning; and 3) imposing the clustering-result constraint, i.e., the graph constructed by the intrinsic affinity matrix has exactly c connected components, where c is the number of clusters. In this way, the two affinity matrices and the transformation matrix are iteratively updated until each reaches its optimum, so that the two affinity matrices become consistent and the intrinsic subspace is learnt via the transformation matrix. Experimental results on both synthetic and benchmark datasets verify that the proposed method outputs more effective clustering results than previous clustering methods.
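The clustering-result constraint above, that the graph built from the learnt affinity matrix has exactly c connected components, means the cluster labels fall out of the graph directly, with no separate k-means step. A minimal sketch of that observation, using a toy block-diagonal affinity matrix (an illustration, not the paper's optimization):

```python
import numpy as np

def graph_clusters(A):
    """Label each vertex by the connected component of the affinity graph
    (depth-first search). If the learnt affinity matrix yields exactly c
    components, these labels are themselves the clustering result."""
    n = len(A)
    labels = [-1] * n
    c = 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack, labels[s] = [s], c
        while stack:
            u = stack.pop()
            for v in range(n):
                if A[u][v] > 0 and labels[v] == -1:
                    labels[v] = c
                    stack.append(v)
        c += 1
    return labels, c

# Toy affinity whose graph has exactly c = 2 connected components.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
```

Running `graph_clusters(A)` assigns the first three points to one cluster and the last two to another.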

Shi, Xiaoshuang (University of Florida) | Xing, Fuyong (University of Florida) | Xu, Kaidi (University of Florida) | Sapkota, Manish (University of Florida) | Yang, Lin (University of Florida)

Recently, many graph-based hashing methods have emerged to tackle large-scale problems. However, two major bottlenecks remain: (1) directly learning discrete hashing codes is an NP-hard optimization problem; (2) the complexity of both storage and computation to build a graph with n data points is O(n²). To address these two problems, in this paper we propose a novel yet simple supervised graph-based hashing method, asymmetric discrete graph hashing, which preserves the asymmetric discrete constraint and builds an asymmetric affinity matrix to learn compact binary codes. Specifically, we utilize two different rather than identical discrete matrices to better preserve the similarity of the graph with short binary codes. We generate the asymmetric affinity matrix using m (m ≪ n) selected anchors to approximate the similarity among all training data, so that computational time and storage requirements are significantly reduced. In addition, the proposed method jointly learns discrete binary codes and a low-dimensional projection matrix to further improve the retrieval accuracy. Extensive experiments on three benchmark large-scale databases demonstrate its superior performance over recent state-of-the-art methods with lower training time costs.
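The anchor-based approximation described above can be sketched as follows. The Gaussian anchor-graph construction here is a common choice assumed for illustration; the paper's exact affinity may differ. The point is that only an n × m similarity matrix is ever computed, so storage and time scale as O(nm) instead of the O(n²) of a full pairwise graph.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 200, 10, 8
X = rng.normal(size=(n, d))                       # n training points
anchors = X[rng.choice(n, m, replace=False)]      # m << n selected anchors

# Gaussian similarities to the anchors only: an n x m matrix Z,
# row-normalized so each point's anchor weights sum to one.
d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
Z = np.exp(-d2)
Z /= Z.sum(axis=1, keepdims=True)

# Low-rank approximation of the full affinity, A ~ Z diag(Z.sum(0))^-1 Z^T.
# In practice this n x n matrix is never materialized; it is formed here
# only to show that the approximation is symmetric and full-sized.
Lam_inv = np.diag(1.0 / Z.sum(axis=0))
A_approx = Z @ Lam_inv @ Z.T
```

Because Z has only n × m entries, all downstream operations can work with Z directly rather than with the n × n affinity.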