Tensor Completion Algorithms in Big Data Analytics

arXiv.org Machine Learning

Tensor completion is the problem of filling in the missing or unobserved entries of a partially observed tensor. Because tensors naturally describe multidimensional, complex datasets, tensor completion algorithms and their applications have received wide attention in data mining, computer vision, signal processing, neuroscience, and related fields. In this survey, we provide a modern overview of recent advances in tensor completion algorithms from the perspective of big data analytics, characterized by diverse variety, large volume, and high velocity. To facilitate comprehension and comparison of the many existing advances, we summarize and categorize them into four groups: general tensor completion algorithms, tensor completion with auxiliary information (variety), scalable tensor completion algorithms (volume), and dynamic tensor completion algorithms (velocity). In addition, we introduce their applications to real-world data-driven problems and present an open-source package covering several widely used tensor decomposition and completion algorithms. Our goal is to summarize these popular methods for researchers, promote research in this field, and give practitioners a readily available repository. Finally, we discuss some challenges and promising research directions in this community for future exploration.
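
For readers new to the area, the following minimal sketch (ours, not the survey's) sets up the completion problem the abstract describes: a low-rank tensor observed through a Boolean mask. The variable names and the CP construction are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a synthetic rank-3 CP tensor of shape 20 x 20 x 20.
A, B, C = (rng.standard_normal((20, 3)) for _ in range(3))
X_true = np.einsum('ir,jr,kr->ijk', A, B, C)

# Keep 30% of the entries; a completion algorithm must infer the rest.
mask = rng.random(X_true.shape) < 0.3
X_obs = np.where(mask, X_true, 0.0)
```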


On Tensor Completion via Nuclear Norm Minimization

arXiv.org Machine Learning

Many problems can be formulated as recovering a low-rank tensor. Although an increasingly common task, tensor recovery remains challenging because of the delicacy associated with decomposing higher-order tensors. To overcome these difficulties, existing approaches often proceed by unfolding tensors into matrices and then applying techniques for matrix completion. We show here that such matricization fails to exploit the tensor structure and may lead to a suboptimal procedure. More specifically, we investigate a convex optimization approach to tensor completion that directly minimizes a tensor nuclear norm, and we prove that this leads to an improved sample size requirement. To establish our results, we develop a series of algebraic and probabilistic techniques, such as a characterization of the subdifferential of the tensor nuclear norm and concentration inequalities for tensor martingales, which may be of independent interest and could be useful in other tensor-related problems.
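
To make "unfolding tensors into matrices" concrete, the sketch below (our illustration, not code from the paper) computes the common matricization-based surrogate that the abstract argues against: the sum of nuclear norms of all mode-k unfoldings. The tensor nuclear norm the paper minimizes instead is NP-hard to compute exactly and is not shown here.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k matricization: move axis `mode` to the front and flatten."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def sum_of_nuclear_norms(X):
    """Sum of nuclear norms of all mode-k unfoldings (the SNN surrogate)."""
    return sum(np.linalg.norm(unfold(X, k), 'nuc') for k in range(X.ndim))
```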


Provable Model for Tensor Ring Completion

arXiv.org Machine Learning

Tensors are a natural way to represent high-dimensional data and thus preserve more intrinsic information than matrices when dealing with high-order data [1, 2, 3]. In practice, some tensor entries are lost during data acquisition and transformation; tensor completion estimates the missing entries under the assumption that most elements are correlated [4]. This correlation can be modeled by low-rank structures, which are used in a range of applications including signal processing [2], machine learning [5], remote sensing [6], and computer vision [7]. There are two main frameworks for tensor completion: variational energy minimization and tensor rank minimization [8, 9], where the energy is usually a recovery error in the context of tensor completion and the definition of rank varies with the choice of tensor decomposition. The first framework is typically realized by alternating least squares (ALS), in which each core tensor is updated in turn while the others are held fixed [8]. ALS-based methods require a predefined tensor rank, while rank minimization does not. Common forms of tensor decomposition are summarized as follows.
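
The ALS update described above is easy to illustrate. The following toy sketch (ours, not the paper's) fits a rank-R CP model to the observed entries of a 3-way tensor, updating one factor at a time by masked least squares while the others are fixed; CP factors are used here only to keep the code short, whereas the paper updates tensor-ring cores. All names are illustrative.

```python
import numpy as np

def cp_als_complete(X_obs, mask, R=3, n_iter=50, reg=1e-6):
    """Toy masked ALS for a rank-R CP model on a 3-way tensor."""
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, R)) for n in X_obs.shape)
    for _ in range(n_iter):
        for _ in range(3):  # update A, then rotate so B and C take a turn
            I, J, K = X_obs.shape
            # Khatri-Rao design built from the two fixed factors.
            D = np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)
            for i in range(I):  # masked least squares, one row of A at a time
                m = mask[i].ravel()
                Dm, ym = D[m], X_obs[i].ravel()[m]
                A[i] = np.linalg.solve(Dm.T @ Dm + reg * np.eye(R), Dm.T @ ym)
            # Rotate the tensor and factors so the same code updates B, then C.
            X_obs, mask = X_obs.transpose(1, 2, 0), mask.transpose(1, 2, 0)
            A, B, C = B, C, A
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```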


Efficient Sparse Low-Rank Tensor Completion Using the Frank-Wolfe Algorithm

AAAI Conferences

Most tensor problems are NP-hard, and low-rank tensor completion is much more difficult than low-rank matrix completion. In this paper, we propose a time- and space-efficient low-rank tensor completion algorithm that uses the scaled latent nuclear norm for regularization and the Frank-Wolfe (FW) algorithm for optimization. We show that all the steps can be performed efficiently. In particular, FW's linear subproblem has a closed-form solution obtained from a rank-one SVD. By exploiting the sparsity of the observed tensor, we only need to maintain sparse tensors and a set of small basis matrices. Experimental results show that the proposed algorithm is more accurate, much faster, and more scalable than the state of the art.
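
The key FW mechanics in the abstract can be shown in the simpler matrix analogue below (our illustration, not the paper's algorithm): the gradient of the squared loss is supported only on the observed entries, so it stays sparse, and the linear subproblem over a nuclear-norm ball is solved in closed form by a rank-one SVD of that sparse gradient. All names and parameters are assumptions for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def fw_matrix_completion(rows, cols, vals, shape, tau=10.0, n_iter=100):
    """Toy FW over the nuclear-norm ball ||X||_* <= tau (matrix analogue)."""
    X = np.zeros(shape)  # dense for brevity; the paper keeps sparse/factored iterates
    for t in range(n_iter):
        # Gradient of 0.5 * sum over observed entries of (X - M)^2 is
        # supported on the observed entries only, so it stays sparse.
        G = sp.csr_matrix((X[rows, cols] - vals, (rows, cols)), shape=shape)
        # Linear subproblem: the minimizer of <S, G> over ||S||_* <= tau is
        # the rank-one matrix -tau * u v^T from the top singular pair of G.
        u, s, vt = svds(G, k=1)
        S = -tau * np.outer(u[:, 0], vt[0])
        X += 2.0 / (t + 2) * (S - X)  # classical FW step size 2/(t+2)
    return X
```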


Scaled Nuclear Norm Minimization for Low-Rank Tensor Completion

arXiv.org Machine Learning

Minimizing the nuclear norm of a matrix has been shown to be very effective at reconstructing a low-rank sampled matrix. Likewise, minimizing the sum of nuclear norms of the matricizations of a tensor has been shown to be very effective at recovering a low-Tucker-rank sampled tensor. In this paper, we propose to recover a low-TT-rank sampled tensor by minimizing a weighted sum of nuclear norms of unfoldings of the tensor. We provide numerical results showing that the proposed method requires significantly fewer samples to recover the original tensor than simply minimizing the sum of nuclear norms, since the structure of the unfoldings in the TT tensor model is fundamentally different from that of the matricizations in the Tucker model.
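
For concreteness, the sketch below (an illustration with assumed weights alpha_k, not the authors' code) computes the regularizer described in the abstract: the TT-style unfolding groups the first k modes into rows and the remaining modes into columns, unlike the Tucker mode-k matricization, which isolates a single mode, and the objective is a weighted sum of the unfoldings' nuclear norms.

```python
import numpy as np

def tt_unfolding(X, k):
    """Reshape modes 1..k into rows and modes k+1..d into columns."""
    dims = X.shape
    return X.reshape(int(np.prod(dims[:k])), -1)

def weighted_tt_nuclear_norm(X, alphas):
    """Weighted sum of nuclear norms of the d-1 TT unfoldings."""
    d = X.ndim
    return sum(alphas[k - 1] * np.linalg.norm(tt_unfolding(X, k), 'nuc')
               for k in range(1, d))
```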