Multiclass spectral feature scaling method for dimensionality reduction
Matsuda, Momo, Morikuni, Keiichi, Imakura, Akira, Ye, Xiucai, Sakurai, Tetsuya
Dimensionality reduction is a technique for reducing the number of variables of data samples and has been successfully applied in many fields to make machine learning algorithms faster and more accurate, including pathological diagnosis from gene expression data [26], the analysis of chemical sensor data [16], community detection in social networks [27], neural spike sorting [1], and others [22]. Depending on whether label information is used, dimensionality reduction methods can be divided into supervised and unsupervised methods. Typical unsupervised methods are principal component analysis (PCA) [12, 15], classical multidimensional scaling (MDS) [4], locality preserving projections (LPP) [11], and t-distributed stochastic neighbor embedding (t-SNE) [28]. To exploit the prior knowledge given by labels, we focus on supervised dimensionality reduction methods, which map data samples into a low-dimensional space suitable for classification by incorporating the label information. One of the most popular supervised methods is linear discriminant analysis (LDA) [3], which maximizes the between-class scatter and reduces the within-class scatter in the low-dimensional space.
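The contrast between unsupervised and supervised dimensionality reduction can be sketched with standard scikit-learn estimators; this is an illustration of the baseline methods cited above (PCA and LDA), not of the paper's proposed spectral feature scaling method, and the synthetic data set is an assumption for demonstration only.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic 3-class data with 20 features (illustrative only).
X, y = make_classification(
    n_samples=300, n_features=20, n_informative=5,
    n_classes=3, n_clusters_per_class=1, random_state=0,
)

# Unsupervised: PCA ignores the labels y and keeps the directions
# of maximal variance.
X_pca = PCA(n_components=2).fit_transform(X)

# Supervised: LDA uses the labels to maximize between-class scatter
# relative to within-class scatter; it yields at most
# (n_classes - 1) components.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # both reduced to 2 dimensions
```

Note that LDA's embedding dimension is capped at the number of classes minus one, whereas PCA can retain any number of components up to the original feature count.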
Oct-16-2019