Experimental Analysis of Legendre Decomposition in Machine Learning

Jianye Pang, Kai Yi, Wanguang Yin, Min Xu

arXiv.org Machine Learning 

Matrix and tensor decomposition approximates a matrix or tensor as the product of a number of smaller matrices or tensors. The main matrix decomposition techniques have been widely used in computer vision, recommendation systems, signal processing, and other fields. Standard methods for third-order nonnegative tensor decomposition include CP decomposition [1] and Tucker decomposition [2]. It is well known that nonnegative Tucker and CP decomposition involve non-convex optimization, so global convergence is not guaranteed. One direction is to place additional assumptions on the data, such as bounded variance, to transform the non-convex optimization problem into a convex one [3, 4]. Legendre decomposition [5] is a recent nonnegative tensor decomposition method proposed by Mahito Sugiyama et al. Compared with existing nonnegative tensor decomposition methods, its main contribution is that it transforms the non-convex optimization problem into a convex one on a submanifold, without additional assumptions on the data; this guarantees global convergence, and gradient descent finds the unique reconstructed tensor that minimizes the Kullback-Leibler (KL) divergence from the input tensor. In this paper, we analyze Legendre tensor decomposition in both theory and application. On the theoretical side, we analyze the properties of the dual parameters and the dually flat manifold introduced by Legendre tensor decomposition.
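To make the optimization concrete, below is a minimal sketch (not the authors' implementation) of Legendre decomposition by plain gradient descent, assuming a strictly positive input tensor P normalized to sum to one and a user-chosen parameter basis B of multi-indices excluding the least index; the function name legendre_decomposition and the hyperparameters lr and iters are illustrative. The reconstruction Q is parameterized by theta coordinates on B, and the gradient of KL(P || Q) in these coordinates is the difference of expectation parameters, eta_Q - eta_P.

```python
import numpy as np

def legendre_decomposition(P, B, lr=0.1, iters=2000):
    """Sketch: fit Q on the submanifold spanned by B to minimize KL(P || Q).

    P : strictly positive numpy tensor with P.sum() == 1.
    B : list of multi-index tuples (the parameter basis), each distinct
        from the all-zeros index, which is absorbed by normalization.
    """
    idx = list(np.ndindex(P.shape))  # all multi-indices of P
    # For each u in B, the up-set {v : u <= v componentwise} (the partial order).
    geq = {u: [v for v in idx if all(a <= b for a, b in zip(u, v))] for u in B}
    eta_P = {u: sum(P[v] for v in geq[u]) for u in B}  # target eta parameters
    theta = {u: 0.0 for u in B}                        # start at the uniform tensor

    for _ in range(iters):
        # log Q(v) = sum of theta(u) over u in B with u <= v, up to normalization.
        logQ = np.zeros(P.shape)
        for u, vs in geq.items():
            for v in vs:
                logQ[v] += theta[u]
        Q = np.exp(logQ)
        Q /= Q.sum()  # normalization plays the role of the potential psi(theta)
        # Convex objective: grad of KL(P || Q) w.r.t. theta(u) is eta_Q(u) - eta_P(u).
        for u in B:
            eta_Q = sum(Q[v] for v in geq[u])
            theta[u] -= lr * (eta_Q - eta_P[u])
    return Q
```

A usage example under the same assumptions, with a hypothetical "one-body" basis of all indices having exactly one nonzero component:

```python
rng = np.random.default_rng(0)
P = rng.random((3, 3, 3)) + 1e-3
P /= P.sum()
B = [v for v in np.ndindex(3, 3, 3) if sum(c > 0 for c in v) == 1]
Q = legendre_decomposition(P, B)
kl = (P * np.log(P / Q)).sum()  # decreases toward the global optimum
```

Because the objective is convex in theta, any fixed-point of the update is the unique global minimizer on the chosen submanifold; the paper also describes a faster natural-gradient variant, for which the plain step above is the simpler alternative.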
