Wedge Sampling: Efficient Tensor Completion with Nearly-Linear Sample Complexity

Hengrui Luo, Anna Ma, Ludovic Stephan, Yizhe Zhu

arXiv.org Machine Learning 

Matrix completion studies the problem of reconstructing a matrix from a (typically random) subset of its entries by exploiting prior structural assumptions such as low rank and incoherence. Roughly speaking, when the underlying n × n matrix has low rank and its singular vectors are sufficiently incoherent, observing Ω(n log n) entries sampled uniformly at random suffices for exact recovery via efficient optimization methods [Keshavan et al., 2009, 2010, Candes and Tao, 2010, Candes and Plan, 2010, Recht, 2011, Candes and Recht, 2012, Jain et al., 2013]. This sample complexity is nearly optimal, since a rank-r matrix has only O(nr) degrees of freedom.

Tensor completion generalizes this problem to higher-order arrays, aiming to recover a low-rank tensor from a limited set of observed entries, for example under uniform random sampling. As a natural higher-order analogue of matrix completion, tensor completion has found broad applications in areas such as recommendation systems [Frolov and Oseledets, 2017], signal and image processing [Govindu, 2005, Liu et al., 2012], and data science [Song et al., 2019]. Despite this close analogy, tensor completion behaves fundamentally differently from its matrix counterpart: it exhibits a pronounced trade-off between computational and statistical complexity. While information-theoretic considerations suggest that relatively few samples suffice for recovery, all currently known polynomial-time algorithms require substantially more observations than this optimal limit.

Polynomial-time methods

A widely used polynomial-time approach to tensor completion is to reduce the problem to matrix completion via matricization, as sketched below.
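To make this reduction concrete, here is a minimal numpy sketch of mode-n matricization (unfolding) under one common convention: the mode-n fibers of a tensor are laid out as the columns of a matrix, and a tensor with low CP rank unfolds to a matrix of the same low rank, so off-the-shelf matrix completion can then be applied to the unfolding. The function name `unfold` and the rank-1 example are illustrative only and are not taken from the paper.

```python
import numpy as np

def unfold(T, mode):
    # Mode-`mode` matricization: move the chosen mode to the front and
    # flatten the rest, so each column of the result is a mode-`mode` fiber.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
a = rng.standard_normal(10)
b = rng.standard_normal(12)
c = rng.standard_normal(14)
T = np.einsum('i,j,k->ijk', a, b, c)  # rank-1 tensor of shape (10, 12, 14)

M = unfold(T, 1)  # shape (12, 140); still rank 1, so matrix completion applies
print(M.shape, np.linalg.matrix_rank(M))
```

Observed tensor entries map one-to-one to observed entries of the unfolding, which is why partial observations of T translate directly into a matrix completion instance.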
