Dimensionality Reduction


Dimensionality Reduction 101 for Dummies like Me

#artificialintelligence

Let's start with the WHY of dimensionality reduction. Before analyzing a data set and drawing inferences from it, it is often necessary to visualize it in order to get a feel for the data. But data sets nowadays contain a large number of random variables (also called features), which makes them difficult to visualize. Sometimes it is even impossible to visualize such high-dimensional data, since we humans lose our bearings beyond three dimensions. This is where dimensionality reduction comes in: the process of reducing the number of random variables of the data set under consideration by obtaining a set of principal variables.
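
As a minimal illustration (not from the article), the sketch below projects scikit-learn's 64-dimensional digits data down to two principal components so it can be plotted; the dataset and parameter choices are purely for demonstration.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# 64-dimensional digit images projected onto 2 principal components for plotting
X, y = load_digits(return_X_y=True)
X_2d = PCA(n_components=2).fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, s=5, cmap="tab10")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```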


Multiclass spectral feature scaling method for dimensionality reduction

arXiv.org Machine Learning

Dimensionality reduction is a technique for reducing the number of variables in data samples and has been successfully applied in many fields to make machine learning algorithms faster and more accurate, including the pathological diagnosis of gene expression data [26], the analysis of chemical sensor data [16], community detection in social networks [27], neural spike sorting [1], and others [22]. Depending on whether they use label information, dimensionality reduction methods can be divided into supervised and unsupervised methods. Typical unsupervised dimensionality reduction methods are principal component analysis (PCA) [12, 15], classical multidimensional scaling (MDS) [4], locality preserving projections (LPP) [11], and t-distributed stochastic neighbor embedding (t-SNE) [28]. To make use of prior knowledge about the labels, we focus on supervised dimensionality reduction methods, which incorporate the label information to map data samples into a low-dimensional space suited to classification. One of the most popular supervised dimensionality reduction methods is linear discriminant analysis (LDA) [3], which maximizes the between-class scatter and reduces the within-class scatter in the low-dimensional space.
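
For a concrete sense of how a supervised method such as LDA uses labels, here is a small, hedged example with scikit-learn's LinearDiscriminantAnalysis on the Wine dataset; the dataset and the choice of two components are illustrative and not taken from the paper.

```python
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_wine(return_X_y=True)                 # 13 features, 3 classes
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)                   # supervised: labels guide the projection
print(X_lda.shape)                                # (178, 2)
```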


TriMap: Large-scale Dimensionality Reduction Using Triplets

arXiv.org Machine Learning

B MORE VISUALIZATIONS. We compare the results of TriMap to LargeVis in Figures 7 and 8. We also provide more visualizations obtained using TriMap in Figure 9. C DISCUSSION. We briefly discuss the results of TriMap and draw a comparison to the other methods. TriMap generally provides better global accuracy than the competing methods. It also successfully maintains the continuity of the underlying manifold. This can be seen in the COIL-20 result, where certain clusters are located farther away from the remaining clusters; however, the underlying structure of the main cluster resembles the one provided by the other methods. TriMap also preserves the continuous structure in the Fashion MNIST and TV News datasets, and it is efficient at uncovering possible outliers in the data. For instance, PCA reveals a large number of outliers in the Tabula Muris and 360K Lyrics datasets.


Laplacian Matrix for Dimensionality Reduction and Clustering

arXiv.org Machine Learning

Many problems in machine learning can be expressed by means of a graph whose nodes represent training samples and whose edges represent relationships between samples in terms of similarity, temporal proximity, or label information. Graphs can in turn be represented by matrices. A special example is the Laplacian matrix, which allows us to assign each node a value that varies only slightly between strongly connected nodes and more strongly between distant nodes. Such an assignment can be used to extract a useful feature representation, find a good embedding of the data in a low-dimensional space, or perform clustering on the original samples. In these lecture notes we first introduce the Laplacian matrix and then present a small number of algorithms designed around it.
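
A minimal sketch of the idea, assuming a k-nearest-neighbor graph and the unnormalized Laplacian L = D - A; the function name and parameter values are illustrative and not taken from the lecture notes.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_embedding(X, n_components=2, n_neighbors=10):
    """Embed samples using eigenvectors of the graph Laplacian (illustrative sketch)."""
    # symmetric k-NN adjacency matrix
    A = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    A = np.maximum(A, A.T)
    D = np.diag(A.sum(axis=1))
    L = D - A                                # unnormalized graph Laplacian
    vals, vecs = eigh(L)                     # eigenvalues in ascending order
    # skip the trivial constant eigenvector, keep the next n_components
    return vecs[:, 1:n_components + 1]
```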


Comprehensive Guide to 12 Dimensionality Reduction Techniques

#artificialintelligence

Have you ever worked on a dataset with more than a thousand features? I have, and let me tell you it's a very challenging task, especially if you don't know where to start! Having a high number of variables is both a boon and a curse. It's great that we have loads of data for analysis, but its sheer size makes that analysis challenging. It's not feasible to analyze each and every variable at a microscopic level. It might take us days or months to perform any meaningful analysis, and we'll lose a ton of time and money for our business! Not to mention the amount of computational power this will take. We need a better way to deal with high-dimensional data so that we can quickly extract patterns and insights from it. So how do we approach such a dataset?


Tensor-Train Parameterization for Ultra Dimensionality Reduction

arXiv.org Machine Learning

Locality preserving projections (LPP) is a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers, and because it is designed for vectorial data, it may destroy structural information when applied to multidimensional data. It also assumes that the data dimension is smaller than the number of instances, which does not hold for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. We therefore propose tensor-train parameterization for ultra dimensionality reduction (TTPUDR), in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. A manifold optimization technique is utilized to solve the new model. We assess the performance of TTPUDR on classification problems, where it significantly outperforms previous methods and several state-of-the-art methods.
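
For reference, here is a compact NumPy sketch of standard LPP (the vectorial baseline the paper improves on), assuming heat-kernel weights and more samples than features; this is not the TTPUDR method itself, and the function name and defaults are illustrative.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Classical Locality Preserving Projections (illustrative, assumes n_samples > n_features)."""
    n = X.shape[0]
    dist = cdist(X, X)                              # pairwise Euclidean distances
    W = np.exp(-dist**2 / t)                        # heat-kernel affinities
    # keep only each point's k nearest neighbours (symmetrized)
    idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    mask = np.zeros_like(W, dtype=bool)
    rows = np.repeat(np.arange(n), n_neighbors)
    mask[rows, idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                       # graph Laplacian
    # generalized eigenproblem X^T L X a = lambda X^T D X a; keep smallest eigenvalues
    A_mat = X.T @ L @ X
    B_mat = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # small ridge for numerical stability
    vals, vecs = eigh(A_mat, B_mat)
    return X @ vecs[:, :n_components]               # low-dimensional embedding
```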


Spectral Overlap and a Comparison of Parameter-Free, Dimensionality Reduction Quality Metrics

arXiv.org Machine Learning

Nonlinear dimensionality reduction methods are a popular tool for data scientists and researchers to visualize complex, high dimensional data. However, while these methods continue to improve and grow in number, it is often difficult to evaluate the quality of a visualization due to a variety of factors such as lack of information about the intrinsic dimension of the data and additional tuning required for many evaluation metrics. In this paper, we seek to provide a systematic comparison of dimensionality reduction quality metrics using datasets where we know the ground truth manifold. We utilize each metric for hyperparameter optimization in popular dimensionality reduction methods used for visualization and provide quantitative metrics to objectively compare visualizations to their original manifold. In our results, we find a few methods that appear to consistently do well and propose the best performer as a benchmark for evaluating dimensionality reduction based visualizations.
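
As one example of a reusable quality metric, the hedged snippet below scores a t-SNE embedding with scikit-learn's trustworthiness measure; the paper's own metrics and benchmark are not reproduced here, and the dataset and hyperparameters are arbitrary.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)
X_emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
# fraction of local neighbourhoods preserved by the embedding (1.0 = perfect)
score = trustworthiness(X, X_emb, n_neighbors=5)
print(f"trustworthiness: {score:.3f}")
```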


Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

arXiv.org Machine Learning

Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs designed to mislead classification. This work proposes a detection method that combines nonlinear dimensionality reduction with density estimation techniques. Our empirical findings show that the proposed approach effectively detects adversarial examples crafted by non-adaptive attackers, i.e., attackers not specifically tuned to bypass the detection method. Given these promising results, we plan to extend our analysis to adaptive attackers in future work.
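
A rough sketch of the general recipe (nonlinear dimensionality reduction followed by density estimation), using kernel PCA and a kernel density estimate as stand-ins; this is not the paper's exact pipeline, and the features, bandwidth, and threshold below are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KernelDensity

# Fit a nonlinear DR model and a density estimate on clean training representations
X_train = np.random.RandomState(0).rand(500, 64)   # stand-in for clean deep features
dr = KernelPCA(n_components=10, kernel="rbf").fit(X_train)
kde = KernelDensity(bandwidth=0.5).fit(dr.transform(X_train))

def looks_adversarial(x, threshold=-50.0):
    """Flag inputs whose log-density in the reduced space is unusually low (illustrative)."""
    log_density = kde.score_samples(dr.transform(x.reshape(1, -1)))[0]
    return log_density < threshold
```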


Riemannian joint dimensionality reduction and dictionary learning on symmetric positive definite manifold

arXiv.org Machine Learning

Dictionary learning (DL) and dimensionality reduction (DR) are powerful tools for analyzing high-dimensional noisy signals. This paper proposes a novel Riemannian joint dimensionality reduction and dictionary learning (R-JDRDL) method on symmetric positive definite (SPD) manifolds for classification tasks. The joint learning accounts for the interaction between the dimensionality reduction and dictionary learning procedures by connecting them in a unified framework, and we exploit a Riemannian optimization framework to solve the DR and DL problems jointly. Finally, we demonstrate that the proposed R-JDRDL outperforms existing state-of-the-art algorithms when used for image classification tasks.


Data Dimensionality Reduction in the Age of Machine Learning

#artificialintelligence

Machine learning is all the rage as companies try to make sense of the mountains of data they are collecting. Data is everywhere and proliferating at unprecedented speed. But more data is not always better. In fact, large amounts of data can not only considerably slow down system execution but can sometimes even degrade performance in data analytics applications. We have found, through years of formal and informal testing, that data dimensionality reduction -- the process of reducing the number of attributes under consideration when running analytics -- is useful not only for speeding up algorithm execution but also for improving overall model performance.
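
A small, hedged example of this idea: reducing 64 attributes to 20 principal components before fitting a classifier with scikit-learn; the component count and the choice of model are arbitrary illustrations, not recommendations from the article.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
# Reduce 64 attributes to 20 principal components before fitting the classifier
pipe = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
print(cross_val_score(pipe, X, y, cv=5).mean())
```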