Curvature Augmented Manifold Embedding and Learning

Liu, Yongming

arXiv.org Machine Learning 

Dimension reduction (DR) is a long-standing and active area in the engineering, science, and machine learning communities. It may go by different names and conventions depending on the field: in engineering, for example, it is often referred to as reduced-order modeling, and in machine learning it is closely related to data visualization. The core concept is to mitigate the curse of dimensionality by projecting the data features to a low-dimensional space (2D or 3D for data visualization problems, though not necessarily for general DR problems). Once the low-dimensional data structure is obtained, many analyses, such as classification and regression, can be performed more conveniently than their counterparts in the high-dimensional space. The DR method can be traced back to the most widely used principal component analysis (PCA) [1], a linear DR method based on the eigenvalue problem over all data points. PCA has alternative names in engineering and science, such as proper orthogonal decomposition [2] in structural dynamics and the Karhunen-Loève expansion in engineering statistics [3]. Nonlinear DR methods have been proposed to overcome the apparent limitations of linear DR methods; examples include locally linear embedding (LLE) [4], ISOMAP [5], and Laplacian Eigenmaps [6], among many others. A detailed review of these earlier developments can be found in [7].
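The eigenvalue formulation of PCA mentioned above can be sketched in a few lines: center the data, form the covariance matrix, and project onto the leading eigenvectors. This is a minimal illustrative implementation, not the paper's method; the function name `pca` and the toy data are assumptions for the example.

```python
import numpy as np

def pca(X, n_components=2):
    """Linear DR via the eigendecomposition of the data covariance matrix."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]       # sort components by variance
    components = eigvecs[:, order[:n_components]]
    return Xc @ components                  # low-dimensional embedding

# Toy usage: project 5-dimensional data down to 2D.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, n_components=2)
print(Z.shape)  # (100, 2)
```

Nonlinear methods such as LLE or ISOMAP replace this single global eigenvalue problem with neighborhood-based constructions, which is what lets them capture curved manifold structure that PCA cannot.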
