Exact Recovery of Low-rank Tensor Decomposition under Reshuffling

arXiv.org Machine Learning

Low-rank tensor decomposition is a promising approach for the analysis and understanding of real-world data. Many such analyses require correct recovery of the true latent factors, but the conditions for exact recovery are not known for many existing tensor decomposition methods. In this paper, we derive such conditions for a general class of tensor decomposition methods in which each latent tensor component can be reshuffled into a low-rank matrix of arbitrary shape. The reshuffling operation generalizes the traditional unfolding operation and provides the flexibility to recover the true latent factors of complex data structures. We prove that exact recovery can be guaranteed by a convex program when a certain incoherence measure is upper bounded. Results on image steganography show that our method achieves state-of-the-art performance. The theoretical analysis in this paper is expected to be useful for deriving similar results for other types of tensor decomposition methods.
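To make the reshuffling idea concrete, the following is a minimal NumPy sketch (with names of our own choosing, not the paper's API) of a reshuffling operator: a bijective rearrangement of a tensor's entries into a matrix of arbitrary shape. The classical mode-k unfolding is recovered as the special case where the permutation simply moves mode k to the rows.

    import numpy as np

    def reshuffle(tensor, perm, rows, cols):
        # Rearrange the tensor's entries by the flat permutation `perm`,
        # then reshape the result into a (rows, cols) matrix.
        flat = tensor.reshape(-1)
        assert flat.size == rows * cols and perm.size == flat.size
        return flat[perm].reshape(rows, cols)

    def mode_k_unfolding(tensor, k):
        # Mode-k unfolding expressed as a reshuffle: the permutation is
        # found by tracking where moveaxis sends each flat index.
        idx = np.arange(tensor.size).reshape(tensor.shape)
        perm = np.moveaxis(idx, k, 0).reshape(-1)
        return reshuffle(tensor, perm, tensor.shape[k],
                         tensor.size // tensor.shape[k])

    # Sanity check: the reshuffle-based unfolding matches the usual one.
    T = np.random.randn(3, 4, 5)
    assert np.allclose(mode_k_unfolding(T, 1),
                       np.moveaxis(T, 1, 0).reshape(4, -1))

Any permutation `perm` defines a valid reshuffling, which is exactly the flexibility the abstract refers to: the target matrix shape need not correspond to any single mode.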


Robust Tensor Recovery using Low-Rank Tensor Ring

arXiv.org Machine Learning

Robust tensor completion recovers the low-rank and sparse parts of a tensor from its partially observed entries. In this paper, we propose the robust tensor ring completion (RTRC) model and rigorously analyze its exact recovery guarantee via a TR-unfolding scheme; the result is consistent with that of the matrix case. We propose algorithms for tensor ring robust principal component analysis (TRRPCA) and RTRC using the alternating direction method of multipliers (ADMM). Numerical experiments demonstrate that the proposed method outperforms the state of the art in terms of recovery accuracy.
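As background for the TR-unfolding scheme and the ADMM solver mentioned above, here is an illustrative sketch (our own naming, not the authors' code) of a circular TR-unfolding, together with the two proximal operators a standard ADMM for a low-rank-plus-sparse model alternates between: singular value thresholding for the nuclear norm term and entrywise soft-thresholding for the $\ell_1$ term. Conventions for TR-unfolding vary across papers; this shows the idea only.

    import numpy as np

    def tr_unfolding(tensor, k, l):
        # Circular unfolding: cyclically shift the modes so mode k comes
        # first, then matricize with the first l shifted modes as rows.
        d = tensor.ndim
        order = [(k + i) % d for i in range(d)]
        shifted = np.transpose(tensor, order)
        rows = int(np.prod(shifted.shape[:l]))
        return shifted.reshape(rows, -1)

    def svt(M, tau):
        # Singular value thresholding: prox of tau * (nuclear norm).
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def soft(M, tau):
        # Entrywise soft-thresholding: prox of tau * (l1 norm).
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)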


Optimal low rank tensor recovery

arXiv.org Machine Learning

We investigate the sample size requirement for exact recovery of a high order tensor of low rank from a subset of its entries. In the Tucker decomposition framework, we show that the Riemannian optimization algorithm with initial value obtained from a spectral method can reconstruct a tensor of size $n\times n \times\cdots \times n$ tensor of ranks $(r,\cdots,r)$ with high probability from as few as $O((r^d+dnr)\log(d))$ entries. In the case of order 3 tensor, the entries can be asymptotically as few as $O(nr)$ for a low rank large tensor. We show the theoretical guarantee condition for the recovery. The analysis relies on the tensor restricted isometry property (tensor RIP) and the curvature of the low rank tensor manifold. Our algorithm is computationally efficient and easy to implement. Numerical results verify that the algorithms are able to recover a low rank tensor from minimum number of measurements. The experiments on hyperspectral images recovery also show that our algorithm is capable of real world signal processing problems.
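The spectral initialization referred to above can be sketched as follows (an illustrative NumPy sketch under our own assumptions, not the authors' implementation): rescale the zero-filled observed tensor by the inverse sampling rate so it is an unbiased estimate of the full tensor, then take the top-$r$ left singular vectors of each mode-$k$ unfolding as the initial Tucker factors, HOSVD-style.

    import numpy as np

    def spectral_init(T_obs, mask, ranks):
        # T_obs: observed entries (zero elsewhere); mask: 0/1 pattern.
        p = mask.mean()                  # empirical sampling rate
        Y = T_obs * mask / p             # unbiased estimate of the full tensor
        factors = []
        for k, r in enumerate(ranks):
            Yk = np.moveaxis(Y, k, 0).reshape(Y.shape[k], -1)  # mode-k unfolding
            U, _, _ = np.linalg.svd(Yk, full_matrices=False)
            factors.append(U[:, :r])     # top-r left singular vectors
        # Core tensor: multilinear projection onto the factor subspaces.
        G = Y
        for k, U in enumerate(factors):
            G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, k, 0), axes=1), 0, k)
        return G, factors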


On Polynomial Time Methods for Exact Low Rank Tensor Completion

arXiv.org Machine Learning

In this paper, we investigate the sample size required for exact recovery of a high-order tensor of low rank from a subset of its entries. We show that a gradient descent algorithm with an initial value obtained from a spectral method can, in particular, reconstruct a ${d\times d\times d}$ tensor of multilinear ranks $(r,r,r)$ with high probability from as few as $O(r^{7/2}d^{3/2}\log^{7/2}d+r^7d\log^6d)$ entries. When the rank $r=O(1)$, our sample size requirement matches those for nuclear norm minimization (Yuan and Zhang, 2016a) and for alternating least squares assuming orthogonal decomposability (Jain and Oh, 2014). Unlike these earlier approaches, however, our method is efficient to compute, easy to implement, and does not impose extra structure on the tensor. Numerical results are presented to further demonstrate the merits of the proposed approach.
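For intuition, the following simplified projected-gradient sketch captures the spirit of such methods, though it is not the authors' exact algorithm: take a gradient step on the squared error over the observed entries, then retract to the multilinear-rank constraint set by truncated HOSVD. The step size here is illustrative and in practice would be tuned.

    import numpy as np

    def hosvd_truncate(X, ranks):
        # Retract X (approximately) to the given multilinear ranks by
        # truncated HOSVD: project onto each mode's top singular subspace.
        factors = []
        for k, r in enumerate(ranks):
            Xk = np.moveaxis(X, k, 0).reshape(X.shape[k], -1)
            U, _, _ = np.linalg.svd(Xk, full_matrices=False)
            factors.append(U[:, :r])
        G = X
        for k, U in enumerate(factors):   # project to the core...
            G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, k, 0), axes=1), 0, k)
        for k, U in enumerate(factors):   # ...and expand back to full size
            G = np.moveaxis(np.tensordot(U, np.moveaxis(G, k, 0), axes=1), 0, k)
        return G

    def complete(T_obs, mask, ranks, steps=200, lr=1.0):
        p = mask.mean()                              # sampling rate
        X = hosvd_truncate(T_obs * mask / p, ranks)  # spectral-style init
        for _ in range(steps):
            grad = mask * (X - T_obs)                # gradient on observed entries
            X = hosvd_truncate(X - (lr / p) * grad, ranks)
        return X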


Exact tensor completion using t-SVD

arXiv.org Machine Learning

In this paper we focus on the problem of completing multidimensional arrays (also referred to as tensors) from limited sampling. Our approach is based on the recently proposed tensor singular value decomposition (t-SVD) [1]. From this factorization one can derive a notion of tensor rank, referred to as the tensor tubal rank, which has optimality properties similar to those of the matrix rank derived from the SVD. As shown in [2], some multidimensional data, such as panning video sequences, exhibit low tensor tubal rank, and we study the problem of completing such data under random sampling of the data cube. We show that by solving a convex optimization problem that minimizes the tensor nuclear norm, obtained as the convex relaxation of the tensor tubal rank, one can guarantee recovery with overwhelming probability as long as the number of samples is proportional to the degrees of freedom in the t-SVD. In this sense our results are order-wise optimal. The conditions under which this result holds are very similar to the incoherence conditions for matrix completion, although we define incoherence under the algebraic setup of the t-SVD. We demonstrate the performance of the algorithm on real data sets and compare it with existing approaches based on tensor flattening and the Tucker decomposition.
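To unpack the t-SVD notions used above, here is a small illustrative sketch following standard t-SVD conventions (with one common normalization of the tensor nuclear norm; this is not the authors' code): after an FFT along the third mode, the tensor becomes a stack of frontal-slice matrices, the tubal rank is the maximum matrix rank over those slices, and the tensor nuclear norm averages their nuclear norms.

    import numpy as np

    def tubal_rank_and_tnn(X, tol=1e-10):
        # Frontal slices in the Fourier domain along the third mode.
        Xf = np.fft.fft(X, axis=2)
        ranks, tnn = [], 0.0
        for i in range(X.shape[2]):
            s = np.linalg.svd(Xf[:, :, i], compute_uv=False)
            ranks.append(int((s > tol * max(s.max(), tol)).sum()))
            tnn += s.sum()
        # Tubal rank; tensor nuclear norm (averaged over slices).
        return max(ranks), tnn / X.shape[2]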