On Tensor Train Rank Minimization : Statistical Efficiency and Scalable Algorithm
Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
Neural Information Processing Systems
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite this advantage, we face two crucial limitations when applying the TT decomposition to machine learning problems: the lack of statistical theory and the lack of scalable algorithms. In this paper, we address both limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method whose time complexity is as efficient as its space complexity.
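To make the space-efficient representation concrete, the following is a minimal sketch of the classic TT-SVD procedure (sequential truncated SVDs over tensor unfoldings), which computes the TT cores the abstract refers to. This is an illustrative NumPy implementation of standard TT decomposition, not the paper's proposed convex-relaxation or randomized method; the function names are hypothetical.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into a list of TT cores via sequential
    truncated SVDs (the standard TT-SVD sketch). Core k has shape
    (r_{k-1}, n_k, r_k), with r_0 = r_d = 1."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    # Unfold: first mode against the rest.
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_next = min(max_rank, len(S))  # truncate to the TT rank budget
        U, S, Vt = U[:, :r_next], S[:r_next], Vt[:r_next]
        cores.append(U.reshape(rank, dims[k], r_next))
        rank = r_next
        # Absorb singular values into the remainder and re-unfold.
        mat = (np.diag(S) @ Vt).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[d - 1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))
```

With a sufficiently large rank budget the SVDs are exact and the reconstruction recovers the original tensor; a small `max_rank` trades accuracy for the O(d n r^2) storage that motivates the TT format.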
higher-order tensor, statistical efficiency and scalable algorithm, tensor train rank minimization