Non-Negative Matrix Factorization with Constraints

AAAI Conferences

Non-negative matrix factorization (NMF), as a useful decomposition method for multivariate data, has been widely used in pattern recognition, information retrieval, and computer vision. NMF is an effective algorithm for finding the latent structure of the data and leads to a parts-based representation. However, NMF is essentially an unsupervised method and cannot make use of label information. In this paper, we propose a novel semi-supervised matrix decomposition method, called Constrained Non-negative Matrix Factorization, which takes the label information as additional constraints. Specifically, we require that data points sharing the same label have the same coordinates in the new representation space. In this way, the learned representations have more discriminating power. We demonstrate the effectiveness of this novel algorithm through a set of evaluations on real-world applications.
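The label constraint described above can be encoded as an auxiliary matrix that maps samples to coefficient rows. The NumPy sketch below is our own illustration of that idea (variable names and construction are assumptions, not the paper's code): labeled samples of the same class are forced onto a single shared coordinate vector, while unlabeled samples each keep their own.

```python
import numpy as np

def label_constraint_matrix(labels, n_total):
    """Sketch of the auxiliary matrix A for constrained NMF.

    labels : class indices (0-based) for the first len(labels) labeled samples.
    n_total: total number of samples; the remainder are unlabeled.
    Rows of A force same-label samples to share one coordinate vector.
    """
    l = len(labels)
    c = int(max(labels)) + 1                  # number of classes
    A = np.zeros((n_total, c + n_total - l))
    for i, y in enumerate(labels):            # labeled block: one-hot on class
        A[i, y] = 1.0
    for j in range(n_total - l):              # unlabeled block: identity
        A[l + j, c + j] = 1.0
    return A

# Samples 0 and 1 share label 0, so they share the same row of V = A @ Z.
A = label_constraint_matrix([0, 0, 1], n_total=5)
Z = np.random.default_rng(0).random((A.shape[1], 3))  # free coefficients
V = A @ Z                                             # constrained representation
assert np.allclose(V[0], V[1])                        # same label, same coordinates
```

The factorization is then carried out over Z rather than V, so the equal-coordinate constraint holds by construction throughout the optimization.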

Deep Self-representative Concept Factorization Network for Representation Learning

In this paper, we investigate the unsupervised deep representation learning problem and propose a novel framework called Deep Self-representative Concept Factorization Network (DSCF-Net) for clustering deep features. To improve the representation and clustering abilities, DSCF-Net explicitly considers discovering hidden deep semantic features, enhancing the robustness of the deep factorization to noise, and preserving the local manifold structure of deep features. To discover hidden deep representations, DSCF-Net designs a hierarchical factorization architecture using multiple layers of linear transformations, where the hierarchical representation is obtained by optimizing the basis concepts in each layer to improve the representation indirectly. DSCF-Net also improves robustness by first applying subspace recovery for sparse error correction and then performing the deep factorization in the recovered visual subspace. To obtain locality-preserving representations, we also present an adaptive deep self-representative weighting strategy that uses the coefficient matrix as adaptive reconstruction weights to preserve the locality of representations. Extensive comparisons with several related models show that DSCF-Net delivers state-of-the-art performance on several public databases. Representation learning from high-dimensional complex data has long been an important and fundamental problem in pattern recognition and data mining [40-50].
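The hierarchical factorization idea, stacking factorizations so deeper layers capture coarser concepts, can be illustrated with a minimal two-layer sketch. This is not the DSCF-Net objective itself (which uses concept factorization with self-representative weights); it simply chains two plain NMF stages, which is an assumption made for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative two-layer hierarchical factorization: factorize X, then
# factorize the first layer's coefficients again, so each layer's basis
# concepts refine the representation indirectly.
rng = np.random.default_rng(0)
X = rng.random((100, 50))                 # 100 samples, 50 features

layer1 = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
H1 = layer1.fit_transform(X)              # X  ~= H1 @ layer1.components_

layer2 = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
H2 = layer2.fit_transform(H1)             # H1 ~= H2 @ layer2.components_

# Rows of H2 are the deep (layer-2) representations of the samples.
```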

Matrix Factorization, Random Features, Hashing Kernels roundup


In this work, we propose an Augmented-Lagrangian with Block Coordinate Descent (AL-BCD) algorithm that exploits problem structure to obtain a closed-form solution for each block sub-problem, and uses a low-rank representation of the dissimilarity matrix to search active columns without computing the entire matrix. Experiments show our approach to be orders of magnitude faster than existing approaches and able to handle problems of up to 10^6 samples. We also demonstrate successful applications of the algorithm to world-scale facility location, document summarization, and active learning.

This paper introduces the interpolative butterfly factorization for nearly optimal implementation of several transforms in harmonic analysis, when their explicit formulas satisfy certain analytic properties and the matrix representations of these transforms satisfy a complementary low-rank property. A preliminary interpolative butterfly factorization is constructed from interpolative low-rank approximations of the complementary low-rank matrix.

Non-Negative Matrix Factorization


The purpose of this post is to give a simple explanation of a powerful feature extraction technique, non-negative matrix factorization.
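A minimal concrete example of the technique: factor a small non-negative matrix X into two non-negative factors W and H so that X is approximately W @ H. The data values here are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy non-negative data: 4 samples, 3 features.
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [2.0, 1.0, 0.0],
              [1.0, 2.0, 2.0]])

model = NMF(n_components=2, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)    # (4, 2) per-sample weights
H = model.components_         # (2, 3) non-negative "parts"

# Both factors are non-negative, and W @ H approximately reconstructs X.
```

Because the factors cannot cancel each other out, each row of H tends to act as an additive "part", which is the parts-based interpretation the post refers to.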

Integrating Representation Learning and Temporal Difference Learning: A Matrix Factorization Approach

AAAI Conferences

Reinforcement learning is a general formalism for sequential decision-making, with recent algorithm development focusing on function approximation to handle large state spaces and high-dimensional, high-velocity (sensor) data. The success of function approximators, however, hinges on the quality of the data representation. In this work, we explore representation learning within least-squares temporal difference learning (LSTD), with a focus on making the assumptions on the representation explicit and making the learning problem amenable to principled optimization techniques. We reformulate LSTD as a least-squares loss plus concave regularizer, facilitating the addition of a regularized matrix factorization objective to specify the desired class of representations. The resulting joint optimization over the representation and value function parameters enables us to take advantage of recent advances in unsupervised learning and presents a general yet simple formalism for learning representations in reinforcement learning.
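For context on the starting point of the reformulation above, here is a minimal sketch of the classical LSTD solution under fixed, given features (the paper's contribution is to learn those features jointly via matrix factorization, which this sketch does not do; the feature matrices and rewards below are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 200, 4, 0.95
Phi      = rng.random((n, d))   # features of states s_t      (placeholder)
Phi_next = rng.random((n, d))   # features of successors s_t+1 (placeholder)
r        = rng.random(n)        # observed rewards             (placeholder)

# Classical LSTD closed form: solve A w = b with
#   A = Phi^T (Phi - gamma * Phi'),  b = Phi^T r
A = Phi.T @ (Phi - gamma * Phi_next)
b = Phi.T @ r
w = np.linalg.solve(A, b)

values = Phi @ w                # approximate value of each visited state
```

Viewing this closed form as the minimizer of a least-squares loss is what lets the authors bolt a matrix factorization regularizer onto it and optimize features and value weights jointly.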