Linear Algebra 101 -- Part 9: Singular Value Decomposition (SVD)

#artificialintelligence

Singular Value Decomposition (SVD) is another type of matrix decomposition. Unlike eigendecomposition, where the matrix you want to decompose has to be square, SVD lets you decompose a rectangular matrix (one with different numbers of rows and columns). This is often more useful in real-life scenarios, since real-world data rarely comes packaged as a square matrix. First, let's look at the definition itself: a matrix A is factored as A = UΣV^T, where U and V are orthogonal matrices and Σ is a rectangular diagonal matrix of singular values. As you can see, SVD decomposes the matrix into three different matrices.
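
A minimal sketch of this factorization using NumPy, with an illustrative matrix that is not from the article: the rectangular matrix A is decomposed into the three factors U, Σ, and V^T, and multiplying them back recovers A.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [2.0, 2.0]])          # 3x2 rectangular matrix

# full_matrices=False gives the "thin" SVD: U is 3x2, s holds 2 singular values, Vt is 2x2
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A from the three factors: U @ diag(s) @ Vt
A_reconstructed = U @ np.diag(s) @ Vt
print(np.allclose(A, A_reconstructed))  # True
```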


Characteristic matrix of covering and its application to boolean matrix decomposition and axiomatization

arXiv.org Artificial Intelligence

Coverings are an important type of data structure, and covering-based rough sets provide an efficient and systematic theory for dealing with covering data. In this paper, we use boolean matrices to represent and axiomatize three types of covering approximation operators. First, we define two types of characteristic matrices of a covering, which are essentially square boolean matrices, and study their properties. Through these characteristic matrices, the three important types of covering approximation operators are given concise, equivalent representations. Second, the matrix representations of covering approximation operators are applied to boolean matrix decomposition. We provide a necessary and sufficient condition for a square boolean matrix to decompose into the boolean product of another boolean matrix and its transpose, and we develop an algorithm for this boolean matrix decomposition. Finally, based on the above results, the three types of covering approximation operators are axiomatized using boolean matrices. In short, this work borrows extensively from boolean matrices and presents a new view for studying covering-based rough sets.
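
The decomposition studied here asks when a square boolean matrix A can be written as the boolean product of some matrix B and its transpose. A minimal sketch of that boolean product (the matrix B and the helper name below are illustrative assumptions, not from the paper, and the paper's condition and algorithm for recovering B from A are not implemented here):

```python
import numpy as np

def boolean_product(X, Y):
    """Boolean matrix product: (X . Y)[i, j] = OR over k of (X[i, k] AND Y[k, j])."""
    # Compute an ordinary integer product, then threshold to get the boolean OR of ANDs.
    return (X.astype(int) @ Y.astype(int)) > 0

B = np.array([[1, 0],
              [1, 1],
              [0, 1]])            # an illustrative 3x2 boolean matrix

A = boolean_product(B, B.T)       # A is a square (3x3) boolean matrix of the studied form
print(A.astype(int))
```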


Minimax Estimation of Bandable Precision Matrices

Neural Information Processing Systems

The inverse covariance matrix provides considerable insight for understanding statistical models in the multivariate setting. In particular, when the distribution over variables is assumed to be multivariate normal, the sparsity pattern in the inverse covariance matrix, commonly referred to as the precision matrix, corresponds to the adjacency matrix representation of the Gauss-Markov graph, which encodes conditional independence statements between variables. Minimax results under the spectral norm have previously been established for covariance matrices, both sparse and banded, and for sparse precision matrices. We establish minimax estimation bounds for estimating banded precision matrices under the spectral norm. Our results greatly improve upon the existing bounds; in particular, we find that the minimax rate for estimating banded precision matrices matches that of estimating banded covariance matrices.
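
A minimal sketch of the setting described above, with assumed values not taken from the paper: a banded precision matrix encodes a Gauss-Markov chain (zeros outside the band mean conditional independence between non-adjacent variables), while its inverse, the covariance matrix, is generally dense. The naive re-estimation at the end is only for illustration; the paper's minimax-optimal banded estimators are not implemented here.

```python
import numpy as np

p = 6
# Build a tridiagonal (bandwidth-1) precision matrix Omega: nonzero only for |i - j| <= 1.
Omega = 2.0 * np.eye(p)
for i in range(p - 1):
    Omega[i, i + 1] = Omega[i + 1, i] = -0.8

Sigma = np.linalg.inv(Omega)        # covariance matrix: typically dense
print(np.round(Omega, 2))           # banded precision matrix
print(np.round(Sigma, 2))           # its inverse is not sparse

# Sample from N(0, Sigma) and naively re-estimate the precision matrix
# via the inverse sample covariance (ignoring the banded structure).
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=5000)
Omega_hat = np.linalg.inv(np.cov(X, rowvar=False))
print(np.round(Omega_hat, 2))       # approximately banded, up to sampling noise
```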