Graph Embedding with Data Uncertainty

Laakom, Firas, Raitoharju, Jenni, Passalis, Nikolaos, Iosifidis, Alexandros, Gabbouj, Moncef

arXiv.org Artificial Intelligence 

However, the impracticability of working in high-dimensional spaces due to the curse of dimensionality, and the realization that the data in many problems reside on manifolds of much lower dimension than the original space, have led to the development of spectral-based subspace learning (SL) techniques. SL aims at determining a mapping of the original high-dimensional space into a lower-dimensional space that preserves properties of interest in the input data; spectral-based methods obtain this mapping through the eigenanalysis of scatter matrices. The mapping can be obtained using unsupervised methods, such as Principal Component Analysis (PCA) [1, 2], or supervised ones, such as Linear Discriminant Analysis (LDA) [3] and Marginal Fisher Analysis (MFA) [4]. Despite the different motivations of these spectral-based methods, a general formulation known as Graph Embedding was introduced in [4] to unify them within a common framework. For low-dimensional data, where dimensionality reduction is not needed and classification algorithms can be applied directly, many extensions modeling inaccuracies in the input data have recently been proposed [5, 6].
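As a rough illustration of the unified Graph Embedding formulation referred to above, the sketch below solves the usual generalized eigenproblem X L Xᵀ w = λ X B Xᵀ w with NumPy/SciPy, where L is the Laplacian of the intrinsic graph and B that of the penalty (constraint) graph; the function name `graph_embedding`, the ridge term, and the particular choice of graphs are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of linear Graph Embedding, assuming the common
# generalized-eigenproblem form  X L X^T w = lambda X B X^T w.
# Names (graph_embedding, L, B) are illustrative, not from the paper.
import numpy as np
from scipy.linalg import eigh

def graph_embedding(X, L, B, n_components):
    """Project a D x N data matrix X to n_components dimensions.

    L : Laplacian of the intrinsic graph (similarities to preserve).
    B : Laplacian of the penalty/constraint graph, or a scale-normalizing matrix.
    """
    # Scatter-like matrices built from the two graphs.
    S_intrinsic = X @ L @ X.T
    # Small ridge keeps the constraint matrix positive definite for eigh.
    S_constraint = X @ B @ X.T + 1e-8 * np.eye(X.shape[0])
    # Generalized eigenanalysis; eigh returns eigenvalues in ascending order,
    # so the first columns minimize the intrinsic-graph objective.
    eigvals, eigvecs = eigh(S_intrinsic, S_constraint)
    W = eigvecs[:, :n_components]   # D x d projection matrix
    return W.T @ X                  # d x N low-dimensional embedding
```

Under this view, PCA and LDA correspond to particular choices of L and B (e.g., for LDA, an intrinsic graph connecting same-class samples and a penalty graph encoding between-class scatter), which is how the Graph Embedding framework subsumes them.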
