A Closed Form Solution to Best Rank-1 Tensor Approximation via KL divergence Minimization
Kazu Ghalamkari, Mahito Sugiyama
Tensor decomposition is a fundamentally challenging problem. Even the simplest case of tensor decomposition, rank-1 approximation in terms of the Least Squares (LS) error, is known to be NP-hard. Here, we show that, if we consider the KL divergence instead of the LS error, we can analytically derive a closed-form solution for the rank-1 tensor that minimizes the KL divergence from a given positive tensor. Our key insight is to treat a positive tensor as a probability distribution and formulate the process of rank-1 approximation as a projection onto the set of rank-1 tensors. This enables us to solve rank-1 approximation by convex optimization. We empirically demonstrate that our algorithm is an order of magnitude faster than existing rank-1 approximation methods and gives a better approximation of given tensors, which supports our theoretical finding.
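The closed form the abstract refers to can be sketched as follows. A minimal NumPy sketch, under the assumption (consistent with the projection viewpoint described above, but not quoted from the paper) that the KL-minimizing rank-1 tensor is the outer product of the input tensor's mode-wise marginal sums, normalized by the total mass; the function name `rank1_kl_approx` is hypothetical:

```python
import numpy as np

def rank1_kl_approx(P):
    """Sketch of a closed-form rank-1 approximation of a positive
    tensor P under KL divergence: the normalized outer product of
    P's mode-wise marginals (an assumption consistent with treating
    P as a probability distribution; see the paper for the exact
    statement)."""
    S = P.sum()          # total mass of the tensor
    D = P.ndim           # number of modes
    # mode-d marginal: sum over every axis except d
    marginals = [
        P.sum(axis=tuple(k for k in range(D) if k != d))
        for d in range(D)
    ]
    # outer product of all D marginal vectors
    Q = marginals[0]
    for m in marginals[1:]:
        Q = np.multiply.outer(Q, m)
    # each marginal carries one factor of S, so divide by S^(D-1)
    return Q / S ** (D - 1)

rng = np.random.default_rng(0)
P = rng.random((3, 4, 5)) + 0.1   # strictly positive tensor
Q = rank1_kl_approx(P)
```

Note that no iteration is involved: the candidate minimizer preserves the total mass and all mode-wise marginals of `P`, which is what makes the construction a single analytic step rather than an alternating optimization.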
Mar-4-2021