Provable Model for Tensor Ring Completion

arXiv.org Machine Learning

Tensors are a natural way to represent high-dimensional data, and they preserve more intrinsic information than matrices when dealing with high-order data [1, 2, 3]. In practice, some tensor entries are missing during data acquisition and transformation; tensor completion estimates these missing entries based on the assumption that most elements are correlated [4]. This correlation can be modeled by low-rank data structures, which arise in a range of applications, including signal processing [2], machine learning [5], remote sensing [6], and computer vision [7]. There are two main frameworks for tensor completion: variational energy minimization and tensor rank minimization [8, 9], where the energy is usually a recovery error in the context of tensor completion and the definition of rank varies with the choice of tensor decomposition. The first framework is typically realized by means of alternating least squares (ALS), in which each core tensor is updated in turn while the others are held fixed [8]. The ALS-based method requires a predefined tensor rank, whereas rank minimization does not. Common forms of tensor decomposition are summarized as follows.
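
As a concrete illustration of the ALS framework described above, the following is a minimal sketch of rank-constrained completion for a 3-way tensor. It uses the CP format rather than the paper's tensor ring format for brevity, together with the common impute-then-update heuristic: missing entries are filled from the current low-rank estimate, then each factor is refit by least squares while the others are held fixed. All function names are illustrative, not from the paper.

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: (I x R), (J x R) -> (I*J x R).
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als_complete(T, mask, rank=5, n_iter=50, seed=0):
    # ALS completion of a 3-way tensor T with boolean observation mask.
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(n_iter):
        # Impute missing entries from the current low-rank estimate.
        est = np.einsum('ir,jr,kr->ijk', *factors)
        X = np.where(mask, T, est)
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])  # matches C-order unfolding
            unfold = np.moveaxis(X, mode, 0).reshape(T.shape[mode], -1)
            # Least-squares update of this factor with the others fixed.
            factors[mode] = unfold @ kr @ np.linalg.pinv(kr.T @ kr)
    return np.einsum('ir,jr,kr->ijk', *factors)
```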


Tensor Grid Decomposition with Application to Tensor Completion

arXiv.org Machine Learning

The recently prevalent tensor train (TT) and tensor ring (TR) decompositions can be interpreted graphically as (locally) linearly interconnected latent factors and exhibit exponential decay of correlation. The projected entangled pair state (PEPS, also called the two-dimensional TT) extends the spatial dimension of TT, and its polycyclic structure can be viewed as a square grid. Compared with TT, its algebraic decay of correlation reflects stronger interaction between tensor modes. In this paper we adopt the PEPS and develop a tensor grid (TG) decomposition with an efficient realization termed splitting singular value decomposition (SSVD). Using alternating least squares (ALS), a method called TG-ALS interpolates the missing entries of a tensor from its partial observations. Different kinds of data are used in the experiments, including synthetic data, color images, and real-world videos. Experimental results demonstrate that TG has much more representational power than TT and TR.
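
The SSVD of the paper is specific to the TG/PEPS structure; as a point of reference, the sketch below shows the classical TT-SVD, the sequential splitting procedure that such decompositions build on: each truncated SVD splits one mode off into a core and passes the remainder to the next step. This is an illustration of the splitting idea under that assumption, not the SSVD algorithm itself.

```python
import numpy as np

def tt_svd(T, max_rank=8, tol=1e-10):
    # Sequentially split T into TT cores via truncated SVDs.
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, int(np.sum(s > tol)))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Push the remaining part on, grouping the next mode into the rows.
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores
```

For a 3-way input the result is three linked cores of shapes (1, I, r1), (r1, J, r2), and (r2, K, 1); contracting them over the shared rank indices reconstructs the (truncated) tensor.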


Generalized Higher-Order Tensor Decomposition via Parallel ADMM

AAAI Conferences

Higher-order tensors are becoming prevalent in many scientific areas such as computer vision, social network analysis, data mining, and neuroscience. Traditional tensor decomposition approaches face three major challenges: model selection, gross corruptions, and computational efficiency. To address these problems, we first propose a parallel trace norm regularized tensor decomposition method and formulate it as a convex optimization problem. This method does not require the rank of each mode to be specified beforehand and can automatically determine the number of factors in each mode through our optimization scheme. By considering the low-rank structure of the observed tensor, we analyze the equivalence relationship of the trace norm between a low-rank tensor and its core tensor. We then cast a non-convex tensor decomposition model as a weighted combination of multiple much smaller-scale matrix trace norm minimization problems. Finally, we develop two parallel alternating direction methods of multipliers (ADMM) to solve these problems. Experimental results verify that our regularized formulation is effective and that our methods are robust to noise and outliers.
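
The workhorse inside trace norm based ADMM solvers of this kind is the proximal operator of the matrix trace norm, i.e., singular value thresholding applied to mode unfoldings. The sketch below is a simplified, sequential illustration rather than the paper's parallel ADMM: it averages the thresholded unfoldings over the modes with weights and re-imposes the observed entries; all names are hypothetical.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: prox of tau * (trace norm) at M.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def complete(T_obs, mask, weights, tau=1.0, n_iter=100):
    # Weighted sum of mode-wise thresholded unfoldings, data re-imposed.
    X = np.where(mask, T_obs, 0.0)
    for _ in range(n_iter):
        Z = sum(w * fold(svt(unfold(X, m), tau / w), m, X.shape)
                for m, w in enumerate(weights))
        X = np.where(mask, T_obs, Z)
    return X
```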


Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion

Neural Information Processing Systems

Low-rank tensor estimation has been frequently applied to many real-world problems. Despite these successful applications, existing Schatten 1-norm minimization (SNM) methods may become very slow or even inapplicable for large-scale problems. To address this difficulty, we propose an efficient and scalable core tensor Schatten 1-norm minimization method for simultaneous tensor decomposition and completion, with much lower computational complexity. We first establish the equivalence between the Schatten 1-norm of a low-rank tensor and that of its core tensor. The Schatten 1-norm of the core tensor then replaces that of the whole tensor, which leads to a much smaller-scale matrix SNM problem. Finally, an efficient algorithm with a rank-increasing scheme is developed to solve the proposed problem with a convergence guarantee. Extensive experimental results show that our method is usually more accurate than the state-of-the-art methods and is orders of magnitude faster.
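
Since the method replaces the Schatten 1-norm of the whole tensor with that of its much smaller Tucker core, the classical higher-order orthogonal iteration (HOOI) that computes such a core is the natural building block. Below is a minimal 3-way HOOI sketch under fixed multilinear ranks; the paper's rank-increasing scheme and SNM subproblem are omitted, and the function names are illustrative.

```python
import numpy as np

def hooi3(T, ranks, n_iter=20):
    # Higher-order orthogonal iteration for a 3-way tensor: each factor is
    # refreshed from the leading left singular vectors of T contracted with
    # the other two factors; the Tucker core is much smaller than T.
    svec = lambda M, r: np.linalg.svd(M, full_matrices=False)[0][:, :r]
    I, J, K = T.shape
    U = svec(T.reshape(I, -1), ranks[0])                      # HOSVD init
    V = svec(np.moveaxis(T, 1, 0).reshape(J, -1), ranks[1])
    W = svec(np.moveaxis(T, 2, 0).reshape(K, -1), ranks[2])
    for _ in range(n_iter):
        U = svec(np.einsum('ijk,jb,kc->ibc', T, V, W).reshape(I, -1), ranks[0])
        V = svec(np.einsum('ijk,ia,kc->jac', T, U, W).reshape(J, -1), ranks[1])
        W = svec(np.einsum('ijk,ia,jb->kab', T, U, V).reshape(K, -1), ranks[2])
    core = np.einsum('ijk,ia,jb,kc->abc', T, U, V, W)
    return core, (U, V, W)
```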


A Dual Framework for Low-rank Tensor Completion

Neural Information Processing Systems

One popular approach to low-rank tensor completion is latent trace norm regularization. However, most existing works in this direction learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that helps in learning a non-sparse combination of tensors. We develop a dual framework for solving the low-rank tensor completion problem, and first show a novel characterization of the dual solution space with an interesting factorization of the optimal solution. Overall, the optimal solution is shown to lie on a Cartesian product of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. Experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications.
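
For reference, the primal objective that such dual frameworks target can be written down directly: the tensor is split into one latent component per mode, each penalized by the trace norm of its own unfolding. The sketch below evaluates this standard latent trace norm completion objective for a candidate split; the paper's non-sparse variant modifies the regularizer, and none of the dual or Riemannian machinery is reproduced here.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def latent_trace_norm_objective(components, T_obs, mask, lam):
    # components[i]: latent tensor penalized via its mode-i unfolding;
    # the components should sum to a tensor matching the observed entries.
    X = sum(components)
    fit = 0.5 * np.sum((mask * (X - T_obs)) ** 2)
    reg = sum(np.linalg.norm(unfold(C, i), 'nuc')
              for i, C in enumerate(components))
    return fit + lam * reg
```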