Beyond Lazy Training for Over-parameterized Tensor Decomposition
Over-parametrization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an $l$-th order tensor in $(\mathbb{R}^d)^{\otimes l}$ of rank $r$ (where $r \ll d$), can variants of gradient descent find a rank-$m$ decomposition where $m > r$? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$, while a variant of gradient descent can find an approximate tensor decomposition when $m = O^*(r^{2.5l}\log d)$. Our results show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
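To make the setting concrete, the following is a minimal numerical sketch of over-parameterized tensor decomposition: plain gradient descent on the squared loss, fitting a symmetric third-order rank-$r$ tensor with $m > r$ components. This is not the paper's algorithm (the authors analyze a variant of gradient descent and a different objective); the dimensions d, r, m, the learning rate, the symmetric setting, and the small random initialization are all illustrative assumptions.

# Sketch only: plain gradient descent on the over-parameterized squared loss,
# not the authors' algorithm. d, r, m, lr, and the init scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 20, 3, 30          # ambient dim, true rank, over-parameterized rank (m > r)

# Ground-truth unit components a_1, ..., a_r and target T = sum_i a_i^{(x)3}
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = np.einsum('ia,ib,ic->abc', A, A, A)

# Over-parameterized factors x_1, ..., x_m with small random initialization
X = 0.1 * rng.standard_normal((m, d))

lr = 0.05
for step in range(2000):
    R = np.einsum('ia,ib,ic->abc', X, X, X) - T       # residual tensor
    # Gradient of 0.5 * ||sum_i x_i^{(x)3} - T||_F^2 w.r.t. each x_i:
    # since R is symmetric, grad_i = 3 * R(., x_i, x_i)
    G = 3.0 * np.einsum('abc,ib,ic->ia', R, X, X)
    X -= lr * G
    if step % 500 == 0:
        print(step, 0.5 * np.linalg.norm(R) ** 2)

The step size and initialization are not tuned; with a small init the loss may decrease slowly at first, consistent with leaving the lazy (near-initialization) regime before fitting the low-rank structure.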
Review for NeurIPS paper: Beyond Lazy Training for Over-parameterized Tensor Decomposition
1. The main concern is about the utilization of over-parameterization in tensor decomposition. For a rank-$r$ tensor, tensor decomposition aims to find the $r$ components, which have a physical interpretation. However, the proposed approach instead finds $m = O(r^{2.5\ell})$ components, which could be far from the target $r$ components. For example, even for a third-order tensor it finds $m = O(r^{7.5})$ components, much larger than $r$.
2. Perhaps the goal of this paper is to understand the effect of over-parameterization in tensor decomposition. However, if that is the case, the objective function is quite different from the classical one, and the algorithm is also different from simple gradient descent.
Meta-review for NeurIPS paper: Beyond Lazy Training for Over-parameterized Tensor Decomposition
This is a good contribution, with highly non-trivial theoretical results about the role of over-parameterization in tensor decomposition. Some reviewers are worried about the lack of numerical experiments and the weak connection to practical algorithms, but this is acceptable for papers with a solid theoretical contribution at NeurIPS. The authors make it clear in the rebuttal that their goal is not to develop an algorithm for neural networks. They also promised to add some numerical experiments. For these reasons, I recommend acceptance (poster).