Provable Model for Tensor Ring Completion

arXiv.org Machine Learning

Tensors are a natural way to represent high-dimensional data, and they preserve more intrinsic information than matrices when dealing with high-order data [1, 2, 3]. In practice, some tensor entries are missing during data acquisition and transformation; tensor completion estimates the missing entries based on the assumption that most elements are correlated [4]. This correlation can be modeled as a low-rank data structure, which is exploited in a range of applications, including signal processing [2], machine learning [5], remote sensing [6], and computer vision [7]. There are two main frameworks for tensor completion: variational energy minimization and tensor rank minimization [8, 9], where the energy is usually a recovery error in the context of tensor completion and the definition of rank varies with the tensor decomposition. The first framework is typically realized by alternating least squares (ALS), in which each core tensor is updated in turn while the others are held fixed [8]. The ALS-based method requires a predefined tensor rank, while rank minimization does not. Common forms of tensor decomposition are summarized as follows.
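The per-core ALS update described above is easy to prototype. Below is a minimal NumPy sketch for a 3-way tensor with tensor-ring structure; the function name tr_als_complete, the fixed ranks, and the per-slab least-squares formulation are illustrative assumptions on our part, not the paper's exact algorithm.

```python
import numpy as np

def tr_als_complete(X, mask, ranks=(3, 3, 3), n_sweeps=25, seed=0):
    """Fill in missing entries of a 3-way tensor via tensor-ring ALS (sketch).

    X     : (n1, n2, n3) array; values where mask is False are ignored
    mask  : boolean array, True where X is observed
    ranks : predefined TR ranks (r1, r2, r3), as ALS requires
    """
    rng = np.random.default_rng(seed)
    r = list(ranks)
    # Core k has shape (r[k], n_k, r[(k+1) % 3]); the ring closes r3 -> r1.
    cores = [rng.standard_normal((r[k], X.shape[k], r[(k + 1) % 3])) * 0.3
             for k in range(3)]
    obs = np.argwhere(mask)  # observed multi-indices, shape (m, 3)

    def env(k, midx):
        # Product of the other two cores' slices in ring order, so that
        # X[midx] ~= trace(cores[k][:, midx[k], :] @ env(k, midx)).
        m1, m2 = (k + 1) % 3, (k + 2) % 3
        return cores[m1][:, midx[m1], :] @ cores[m2][:, midx[m2], :]

    for _ in range(n_sweeps):
        for k in range(3):               # update one core, others fixed
            rk, nk, rk1 = cores[k].shape
            for i in range(nk):          # separate LS problem per slab
                rows = obs[obs[:, k] == i]
                if len(rows) == 0:
                    continue
                # trace(G @ P) = vec(G) . vec(P^T): linear in the core slab G.
                A = np.stack([env(k, m).T.ravel() for m in rows])
                b = X[tuple(rows.T)]
                g, *_ = np.linalg.lstsq(A, b, rcond=None)
                cores[k][:, i, :] = g.reshape(rk, rk1)

    # Contract the ring: X_hat[i,j,k] = trace(G1[:,i,:] G2[:,j,:] G3[:,k,:]).
    return np.einsum('aib,bjc,cka->ijk', *cores)
```

As the abstract notes, this scheme presumes the TR ranks are given; in practice they would be chosen by validation or by a rank-adaptive strategy.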


Tensor Grid Decomposition with Application to Tensor Completion

arXiv.org Machine Learning

The recently prevalent tensor train (TT) and tensor ring (TR) decompositions can be graphically interpreted as (locally) linear interconnections of latent factors, and they exhibit exponential decay of correlations. The projected entangled pair state (PEPS, also called two-dimensional TT) extends the spatial dimension of TT, and its polycyclic structure can be viewed as a square grid. Compared with TT, its algebraic decay of correlations reflects stronger interactions between tensor modes. In this paper we adopt the PEPS and develop a tensor grid (TG) decomposition together with an efficient realization termed splitting singular value decomposition (SSVD). Using alternating least squares (ALS), a method called TG-ALS interpolates the missing entries of a tensor from its partial observations. Several kinds of data are used in the experiments, including synthetic data, color images, and real-world videos. Experimental results demonstrate that TG has much more representational power than TT and TR.
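To make the grid structure concrete, here is a small NumPy illustration of evaluating a 6-way tensor represented on a 2x3 grid, where each core carries one data mode plus one bond index per grid neighbor. This is a generic PEPS-style layout assumed by us for illustration; the paper's exact TG format and the SSVD routine are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4          # size of every data mode (hypothetical)
r = 3          # uniform bond dimension (hypothetical)

# 2x3 grid of cores: corner cores have 2 bonds, edge-middle cores have 3.
G00 = rng.standard_normal((n, r, r))       # bonds: a (right), p (down)
G01 = rng.standard_normal((n, r, r, r))    # bonds: a (left), b (right), q (down)
G02 = rng.standard_normal((n, r, r))       # bonds: b (left), s (down)
G10 = rng.standard_normal((n, r, r))       # bonds: p (up), c (right)
G11 = rng.standard_normal((n, r, r, r))    # bonds: c (left), q (up), d (right)
G12 = rng.standard_normal((n, r, r))       # bonds: d (left), s (up)

# Contract all bonds; every bond letter appears exactly twice.
X = np.einsum('iap,jabq,kbs,lpc,mcqd,nds->ijklmn',
              G00, G01, G02, G10, G11, G12)
print(X.shape)  # (4, 4, 4, 4, 4, 4)
```

Dropping the vertical bonds p, q, s would decouple the grid into two independent TT chains, which gives some intuition for why the extra spatial dimension strengthens the interaction between modes.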


Generalized Higher-Order Tensor Decomposition via Parallel ADMM

AAAI Conferences

Higher-order tensors are becoming prevalent in many scientific areas such as computer vision, social network analysis, data mining, and neuroscience. Traditional tensor decomposition approaches face three major challenges: model selection, gross corruptions, and computational efficiency. To address these problems, we first propose a parallel trace norm regularized tensor decomposition method and formulate it as a convex optimization problem. This method does not require the rank of each mode to be specified beforehand and can automatically determine the number of factors in each mode through our optimization scheme. By considering the low-rank structure of the observed tensor, we analyze the equivalence of the trace norm between a low-rank tensor and its core tensor. Then we cast the non-convex tensor decomposition model as a weighted combination of multiple much smaller-scale matrix trace norm minimization problems. Finally, we develop two parallel alternating direction methods of multipliers (ADMM) to solve these problems. Experimental results verify that our regularized formulation is effective and that our methods are robust to noise and outliers.
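The workhorse ADMM subproblem in this family of methods is singular value thresholding, the proximal operator of the matrix trace norm, applied per mode unfolding. The sketch below uses it inside a generic HaLRTC-style completion loop; this is our stand-in for the class of algorithm described, not the paper's parallel core-tensor method, and all names and parameters are illustrative.

```python
import numpy as np

def unfold(T, k):
    """Mode-k unfolding: mode k becomes the rows."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

def svt(M, tau):
    """Prox of tau * trace norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def lrtc_admm(T, mask, rho=1.0, n_iters=100):
    """min sum_k ||X_(k)||_*  s.t.  X agrees with T on the observed mask."""
    N = T.ndim
    X = np.where(mask, T, 0.0)
    Y = [np.zeros(T.shape) for _ in range(N)]   # scaled dual variables
    for _ in range(n_iters):
        # Mode-wise SVT steps: independent, hence naturally parallelizable.
        M = [fold(svt(unfold(X + Y[k], k), 1.0 / rho), k, T.shape)
             for k in range(N)]
        # Consensus step, then re-impose the observed entries.
        X = sum(M[k] - Y[k] for k in range(N)) / N
        X[mask] = T[mask]
        for k in range(N):
            Y[k] = Y[k] + X - M[k]
    return X
```

Since the N thresholding steps touch disjoint variables, they can be dispatched to separate workers, which is the sense in which such ADMM schemes parallelize across modes.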


Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion

Neural Information Processing Systems

Low-rank tensor estimation has been frequently applied to many real-world problems. Despite successful applications, existing Schatten 1-norm minimization (SNM) methods may become very slow or even inapplicable for large-scale problems. To address this difficulty, we propose an efficient and scalable core tensor Schatten 1-norm minimization method for simultaneous tensor decomposition and completion, with a much lower computational complexity. We first establish the equivalence between the Schatten 1-norm of a low-rank tensor and that of its core tensor. The Schatten 1-norm of the core tensor is then used to replace that of the whole tensor, which leads to a much smaller-scale matrix SNM problem. Finally, an efficient algorithm with a rank-increasing scheme is developed to solve the proposed problem with a convergence guarantee. Extensive experimental results show that our method is usually more accurate than state-of-the-art methods and is orders of magnitude faster.
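The equivalence the method rests on can be stated as a short worked identity; what follows is our paraphrase of the standard unitary-invariance argument, and the paper's precise statement may differ. For a Tucker form X = C x_1 U_1 ... x_N U_N with column-orthonormal factors, the mode-k unfolding is

```latex
X_{(k)} \;=\; U_k \, C_{(k)}
  \bigl(U_N \otimes \cdots \otimes U_{k+1} \otimes U_{k-1}
        \otimes \cdots \otimes U_1\bigr)^{\top},
\qquad U_i^{\top} U_i = I .
```

Each factor has orthonormal columns, so their Kronecker product does too, and multiplying by matrices with orthonormal columns leaves singular values unchanged; hence

```latex
\|X_{(k)}\|_{S_1} \;=\; \|C_{(k)}\|_{S_1}
\quad \text{for every mode } k .
```

Minimizing the Schatten 1-norm of the small core unfoldings is therefore equivalent to minimizing that of the full tensor, which is where the claimed reduction in problem scale comes from.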


Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination

arXiv.org Machine Learning

CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion that explicitly captures the multilinear latent factors. Existing CP algorithms require the tensor rank to be specified manually; however, determining the tensor rank remains a challenging problem, especially for the CP rank. In addition, existing approaches do not account for uncertainty in the latent factors or in the missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over the multiple latent factors and appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm that scales linearly with the data size. Our method is a tuning-parameter-free approach that can effectively infer the underlying multilinear factors under a low-rank constraint while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large fraction of entries is missing. Moreover, results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
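The sparsity-inducing prior at the heart of such models is an ARD-style construction: one precision per CP component, shared across all factor matrices, so that an entire rank-one term can be switched off at once. The NumPy sketch below simulates from that generative model under our own assumptions (component count, precision values, and noise level are all hypothetical), to show how an over-specified budget of components collapses to the effective rank.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, R = (20, 25, 30), 8       # R deliberately over-specified

# Per-component precisions: large lambda_r => component r effectively pruned.
lam = np.array([1.0, 1.0, 1.0, 1e6, 1e6, 1e6, 1e6, 1e6])  # 3 active terms

# Factor matrices: entries of column r drawn from N(0, 1/lambda_r); the same
# lam is shared across modes so a whole rank-one term shrinks together.
factors = [rng.normal(0.0, 1.0 / np.sqrt(lam), size=(d, R)) for d in dims]

# Noise-free CP tensor: sum_r u_r (outer) v_r (outer) w_r.
X = np.einsum('ir,jr,kr->ijk', *factors)

# Observation model: Gaussian noise with precision tau, plus a random mask.
tau = 100.0
Y = X + rng.normal(0.0, 1.0 / np.sqrt(tau), size=dims)
mask = rng.random(dims) < 0.3   # 30% of entries observed

# The effective rank shows up in the per-component energy
# (||u_r|| * ||v_r|| * ||w_r|| is the Frobenius norm of the r-th term).
energy = [np.prod([np.linalg.norm(U[:, r]) for U in factors])
          for r in range(R)]
print(np.round(energy, 3))      # ~3 components dominate, 5 are negligible
```

In the full model, Gamma hyperpriors over the precisions and the noise let inference drive unneeded lambda_r toward infinity, which is the mechanism behind the automatic rank determination claimed above.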