On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite this advantage, we face two crucial limitations when applying the TT decomposition to machine learning problems: the lack of statistical theory and the lack of scalable algorithms. In this paper, we address both limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method whose time complexity is as efficient as its space complexity. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method on a real higher-order tensor.
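To make the space-efficiency claim concrete, here is a minimal sketch of the TT format itself: a K-way tensor is stored as K order-3 cores, and the full tensor is recovered by contracting the cores in sequence. The function name `tt_to_full` and the toy shapes and ranks are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# A tensor in TT format is stored as a list of order-3 cores G[k] of shape
# (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_K = 1. Storage grows
# linearly in the order K, instead of exponentially as for the full tensor.

def tt_to_full(cores):
    """Contract TT cores into the full tensor (illustrative helper)."""
    full = cores[0]                        # shape (1, n_1, r_1)
    for core in cores[1:]:
        # merge the trailing rank index with the next core's leading rank index
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))      # drop the boundary ranks r_0 = r_K = 1

# toy example: a 4x5x6 tensor with TT ranks (1, 2, 3, 1)
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 5, 3), (3, 6, 1)]]
X = tt_to_full(cores)
print(X.shape)  # (4, 5, 6)
```

Each entry of the reconstructed tensor is a product of matrix slices of the cores, which is why the contraction above reproduces the standard TT evaluation rule.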
Reviews: On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm
The authors propose two algorithms for fitting low-rank tensor-train (TT) decompositions using the Schatten TT norm as the low-rank-inducing regularizer. The first (TT-ADMM) uses ADMM with the tensor itself as the optimization variable; this has exponential space and time complexities. The second (TT-RALS) reduces these complexities to polynomial by optimizing in an alternating fashion (using ADMM) over the factors of a TT factorization of the tensor, and by using randomized dimensionality reduction to estimate the Schatten TT norm. Theoretical analysis argues that, under some regularity conditions (incoherence), the tensor estimate obtained by TT-ADMM is consistent, and that, if the initial guess is close enough to the optimal point, TT-RALS is also consistent. Quantitative rates of convergence are provided.
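The Schatten TT norm regularizer mentioned in the review can be sketched as an average of nuclear norms of the tensor's sequential unfoldings, where the k-th unfolding Q_k(X) flattens the first k modes into rows and the remaining modes into columns. This is a hedged illustration of that quantity; the paper's exact definition and normalization may differ.

```python
import numpy as np

def schatten_tt_norm(X):
    """Average of nuclear norms of the sequential unfoldings of X (sketch)."""
    K = X.ndim
    total = 0.0
    for k in range(1, K):
        rows = int(np.prod(X.shape[:k]))
        Qk = X.reshape(rows, -1)                  # first k modes vs. the rest
        total += np.linalg.norm(Qk, ord='nuc')    # nuclear norm = sum of singular values
    return total / (K - 1)

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 5, 6))
print(schatten_tt_norm(X))
```

Because each unfolding of a low-TT-rank tensor is a low-rank matrix, penalizing these nuclear norms encourages small TT ranks, which is the convex-relaxation idea the paper builds on.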
Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
- Information Technology > Artificial Intelligence > Machine Learning (0.98)
- Information Technology > Data Science > Data Mining > Big Data (0.68)