Dimensionality Reduction


6 Dimensionality Reduction Algorithms With Python

#artificialintelligence

Dimensionality reduction is an unsupervised learning technique. Nevertheless, it can be used as a data-transform pre-processing step for supervised learning algorithms on classification and regression predictive modeling datasets. There are many dimensionality reduction algorithms to choose from and no single best algorithm for all cases. Instead, it is a good idea to explore a range of dimensionality reduction algorithms and different configurations for each algorithm. In this tutorial, you will discover how to fit and evaluate top dimensionality reduction algorithms in Python.
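
As a hedged illustration of the kind of workflow such a tutorial covers, the sketch below fits PCA as a data transform inside a scikit-learn pipeline and evaluates it with cross-validation on a synthetic classification dataset; the dataset, component count, and classifier are illustrative choices, not prescriptions from the article.

# Minimal sketch: PCA as a pre-processing transform for a supervised model,
# evaluated with repeated cross-validation (scikit-learn assumed installed).
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic dataset: 20 input features, only 10 of them informative.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_redundant=10, random_state=7)

# Reduce 20 inputs to 10 components, then classify.
model = Pipeline(steps=[('pca', PCA(n_components=10)),
                        ('clf', LogisticRegression())])

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))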


Linear Discriminant Analysis for Dimensionality Reduction in Python

#artificialintelligence

Reducing the number of input variables for a predictive model is referred to as dimensionality reduction. Fewer input variables can result in a simpler predictive model that may have better performance when making predictions on new data. Linear Discriminant Analysis, or LDA for short, is a predictive modeling algorithm for multi-class classification. It can also be used as a dimensionality reduction technique, providing a projection of a training dataset that best separates the examples by their assigned class. The fact that Linear Discriminant Analysis can also be used for dimensionality reduction often surprises practitioners.
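
For concreteness, here is a minimal sketch of LDA used as a dimensionality reduction transform in scikit-learn; note that LDA can produce at most (number of classes - 1) components, and the synthetic dataset and downstream classifier are purely illustrative.

# Minimal sketch: LDA as a supervised dimensionality reduction transform
# feeding a downstream classifier (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# 10-class problem: LDA can project onto at most 10 - 1 = 9 components.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, n_classes=10, random_state=7)

model = Pipeline(steps=[('lda', LinearDiscriminantAnalysis(n_components=5)),
                        ('clf', GaussianNB())])

scores = cross_val_score(model, X, y, scoring='accuracy', cv=10, n_jobs=-1)
print('Mean accuracy: %.3f' % scores.mean())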


Model-based targeted dimensionality reduction for neuronal population data

Neural Information Processing Systems

Summarizing high-dimensional data using a small number of parameters is a ubiquitous first step in the analysis of neuronal population activity. Recently developed methods use "targeted" approaches that work by identifying multiple, distinct low-dimensional subspaces of activity that capture the population response to individual experimental task variables, such as the value of a presented stimulus or the behavior of the animal. These methods have gained attention because they decompose total neural activity into what are ostensibly different parts of a neuronal computation. However, existing targeted methods have been developed outside of the confines of probabilistic modeling, making some aspects of the procedures ad hoc, or limited in flexibility or interpretability. Here we propose a new model-based method for targeted dimensionality reduction based on a probabilistic generative model of the population response data.
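
The paper's probabilistic generative model is not reproduced here, but the general "targeted" idea it builds on can be sketched: regress each neuron's activity onto the task variables and summarize the resulting coefficient matrix with a low-rank decomposition. The code below is a hypothetical illustration of that regression-based baseline, not the authors' method.

# Hypothetical sketch of regression-based targeted dimensionality reduction:
# regress population activity on task variables, then extract low-dimensional
# variable-specific subspaces from the coefficients. Not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_task_vars = 200, 50, 3

Z = rng.normal(size=(n_trials, n_task_vars))           # task variables (stimulus, choice, ...)
W_true = rng.normal(size=(n_task_vars, n_neurons))
Y = Z @ W_true + 0.5 * rng.normal(size=(n_trials, n_neurons))  # population activity

# Least-squares regression of activity onto task variables.
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)              # shape: (n_task_vars, n_neurons)

# Each row of B is an encoding direction for one task variable; orthonormalize
# them to obtain distinct subspaces capturing variable-specific activity.
Q, _ = np.linalg.qr(B.T)                               # columns span the targeted subspace
projected = Y @ Q[:, :n_task_vars]                     # low-dimensional targeted projection
print(projected.shape)                                 # (n_trials, n_task_vars)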


TensorProjection Layer: A Tensor-Based Dimensionality Reduction Method in CNN

arXiv.org Machine Learning

In this paper, we propose a dimensionality reduction method applied to tensor-structured data as a hidden layer (we call it a TensorProjection Layer) in a convolutional neural network. Our proposed method transforms input tensors into ones with a smaller dimension by projection. The directions of projection are viewed as training parameters associated with our proposed layer and trained via a supervised learning criterion such as minimization of the cross-entropy loss function. We discuss the gradients of the loss function with respect to the parameters associated with our proposed layer. We also conduct simple numerical experiments to evaluate the performance of the TensorProjection Layer.
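
As a rough sketch of the idea (a learnable projection applied along the modes of a convolutional feature map), the hypothetical PyTorch module below reduces the spatial dimensions of its input via trainable projection matrices; the authors' exact parameterization and constraints (e.g., orthonormality of the projection directions) are not reproduced.

# Hypothetical sketch of a tensor-projection hidden layer in PyTorch:
# trainable matrices contract the spatial modes of a (batch, C, H, W) feature
# map down to (batch, C, h, w). Constraints on the projection directions used
# by the authors are omitted here.
import torch
import torch.nn as nn

class TensorProjectionLayer(nn.Module):
    def __init__(self, in_h, in_w, out_h, out_w):
        super().__init__()
        self.P_h = nn.Parameter(0.01 * torch.randn(in_h, out_h))
        self.P_w = nn.Parameter(0.01 * torch.randn(in_w, out_w))

    def forward(self, x):
        # x: (batch, channels, H, W) -> (batch, channels, out_h, out_w)
        x = torch.einsum('bchw,hp->bcpw', x, self.P_h)
        x = torch.einsum('bcpw,wq->bcpq', x, self.P_w)
        return x

# Usage: place between convolutional blocks and train end-to-end with
# cross-entropy, like any other layer.
layer = TensorProjectionLayer(in_h=28, in_w=28, out_h=14, out_w=14)
out = layer(torch.randn(8, 16, 28, 28))
print(out.shape)  # torch.Size([8, 16, 14, 14])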


Multi-Criteria Dimensionality Reduction with Applications to Fairness

Neural Information Processing Systems

Dimensionality reduction is a classical technique widely used for data analysis. One foundational instantiation is Principal Component Analysis (PCA), which minimizes the average reconstruction error. In this paper, we introduce the multi-criteria dimensionality reduction problem where we are given multiple objectives that need to be optimized simultaneously. As an application, our model captures several fairness criteria for dimensionality reduction such as the Fair-PCA problem introduced by Samadi et al. [NeurIPS18] and the Nash Social Welfare (NSW) problem. In the Fair-PCA problem, the input data is divided into k groups, and the goal is to find a single d-dimensional representation for all groups for which the maximum reconstruction error of any one group is minimized.
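
One natural formalization of the Fair-PCA objective as described above (an illustrative reading, not necessarily the exact program studied in the paper) is the min-max problem

$\min_{P \in \mathcal{P}_d} \; \max_{i \in \{1,\dots,k\}} \; \| A_i - A_i P \|_F^2$,

where $A_i$ is the data matrix of group $i$ and $\mathcal{P}_d$ denotes the set of rank-$d$ orthogonal projection matrices. The multi-criteria formulation replaces the max with other ways of aggregating the $k$ group objectives, such as the Nash Social Welfare product.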


Learning nonlinear level sets for dimensionality reduction in function approximation

Neural Information Processing Systems

We developed a Nonlinear Level-set Learning (NLL) method for dimensionality reduction in high-dimensional function approximation with small data. This work is motivated by a variety of design tasks in real-world engineering applications, where practitioners would replace their computationally intensive physical models (e.g., high-resolution fluid simulators) with fast-to-evaluate predictive machine learning models, so as to accelerate the engineering design processes. There are two major challenges in constructing such predictive models: (a) high-dimensional inputs (e.g., many independent design parameters) and (b) small training data, generated by running extremely time-consuming simulations. Thus, reducing the input dimension is critical to alleviate the over-fitting issue caused by data insufficiency. Existing methods, including sliced inverse regression and active subspace approaches, reduce the input dimension by learning a linear coordinate transformation; our main contribution is to extend the transformation approach to a nonlinear regime.
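
As a point of reference for the linear baseline mentioned above, the sketch below computes an active subspace from sampled gradients of a scalar function (an eigendecomposition of the average outer product of gradients); the NLL method itself learns a nonlinear transformation and is not shown.

# Minimal sketch of the *linear* active-subspace baseline: the dominant
# eigenvectors of C = E[grad f(x) grad f(x)^T] give the directions along which
# f varies most; the paper extends this idea to nonlinear transformations.
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 10, 500

def grad_f(x):
    # Toy function f(x) = sin(a.x) with a fixed direction a: its gradient
    # always points along a, so the active subspace is one-dimensional.
    a = np.arange(1, dim + 1) / dim
    return np.cos(a @ x) * a

X = rng.uniform(-1, 1, size=(n_samples, dim))
G = np.array([grad_f(x) for x in X])

C = G.T @ G / n_samples                  # average outer product of gradients
eigvals, eigvecs = np.linalg.eigh(C)     # ascending order
active_dir = eigvecs[:, -1]              # dominant eigenvector
print(eigvals[::-1][:3])                 # large gap after the first eigenvalue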


Dimensionality reduction: theoretical perspective on practical measures

Neural Information Processing Systems

Dimensionality reduction plays a central role in real-world Machine Learning applications, among many other fields. In particular, metric dimensionality reduction, where data from a general metric space is mapped into a low-dimensional space, is often used as a first step before applying machine learning algorithms. In almost all of these applications the quality of the embedding is measured by various average case criteria. Metric dimensionality reduction has also been studied in Math and TCS, within the extremely fruitful and influential field of metric embedding. Yet, the vast majority of theoretical research has been devoted to analyzing the worst case behavior of embeddings and therefore has little relevance to practical settings.
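
To make "average case criteria" concrete, a hedged sketch of one such measure is given below: the average distortion of an embedding, i.e., the mean over point pairs of the ratio between embedded and original distances, symmetrized so that expansion and contraction are both penalized. This is one illustrative choice among the many quality measures studied.

# Minimal sketch: average distortion of an embedding f: X -> R^k, computed as
# the mean over pairs of max(ratio, 1/ratio), where ratio compares embedded to
# original distances. One of many average-case quality measures in use.
import numpy as np
from itertools import combinations

def average_distortion(D_original, Y):
    """D_original: (n, n) pairwise distances in the source metric space.
       Y: (n, k) coordinates of the embedded points."""
    ratios = []
    for i, j in combinations(range(D_original.shape[0]), 2):
        d_emb = np.linalg.norm(Y[i] - Y[j])
        ratio = d_emb / D_original[i, j]
        ratios.append(max(ratio, 1.0 / ratio))   # penalize both directions
    return float(np.mean(ratios))

# Example: random points in R^5 embedded into R^2 by dropping coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(average_distortion(D, X[:, :2]))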


Tight Dimensionality Reduction for Sketching Low Degree Polynomial Kernels

Neural Information Processing Systems

However, in their analysis $C_{\Omega}^2$ can be as large as $\Theta(n^{2q})$, even for a set $\Omega$ of $O(1)$ vectors $x$. We give a new analysis of this sketch, providing nearly optimal bounds. For the important case of $q = 2$ and $\delta = 1/\mathrm{poly}(n)$, this shows that $m = \Theta(\epsilon^{-2} \log(n) + \epsilon^{-1} \log^2(n))$, demonstrating that the $\epsilon^{-2}$ and $\log^2(n)$ terms do not multiply each other. In a number of applications, one has $|\Omega| = \mathrm{poly}(n)$ and in this case our bounds are optimal up to a constant factor. Lastly, we empirically compare our sketch to other sketches for tensor products, and give a novel application to compressing neural networks.
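
For intuition, the sketch below is a generic tensorized random-projection sketch for a degree-$q = 2$ polynomial kernel, in the spirit of the construction analyzed; it is illustrative only and is not claimed to match the paper's exact sketch or bounds.

# Hedged sketch of a degree-2 tensorized random-projection sketch: each of the
# m coordinates is the product of two independent Rademacher inner products,
# so that <S(x), S(y)> approximates <x, y>^2 in expectation.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 2000                           # input dimension, sketch dimension

S1 = rng.choice([-1.0, 1.0], size=(m, n))  # independent sign matrices
S2 = rng.choice([-1.0, 1.0], size=(m, n))

def sketch(x):
    return (S1 @ x) * (S2 @ x) / np.sqrt(m)

x, y = rng.normal(size=n), rng.normal(size=n)
approx = sketch(x) @ sketch(y)
exact = (x @ y) ** 2
print(approx, exact)                       # approx concentrates around exact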


Solving Interpretable Kernel Dimensionality Reduction

Neural Information Processing Systems

Kernel dimensionality reduction (KDR) algorithms find a low dimensional representation of the original data by optimizing kernel dependency measures that are capable of capturing nonlinear relationships. The standard strategy is to first map the data into a high dimensional feature space using kernels prior to a projection onto a low dimensional space. While KDR methods can be easily solved by keeping the most dominant eigenvectors of the kernel matrix, the resulting features are no longer easy to interpret. Interpretable KDR (IKDR) differs in that it projects onto a subspace before the kernel feature mapping; the projection matrix can therefore indicate how the original features linearly combine to form the new features. Unfortunately, the IKDR objective requires a non-convex manifold optimization that is difficult to solve and can no longer be handled by eigendecomposition.
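
To make the objective concrete, the sketch below evaluates one common kernel dependency measure (HSIC) between a linear projection of the data and the labels; IKDR-style methods maximize such an objective over the projection matrix W on a manifold (e.g., the Stiefel manifold), an optimization that is not shown here. The specific measure and kernels are illustrative assumptions, not the paper's exact formulation.

# Hedged sketch: HSIC between a projected dataset X @ W and labels, the kind of
# kernel dependency measure an IKDR objective maximizes over the projection W.
# The manifold optimization over W itself is omitted.
import numpy as np

def rbf_kernel(A, gamma=1.0):
    sq = np.sum(A**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * A @ A.T))

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W = np.linalg.qr(rng.normal(size=(10, 2)))[0]  # a candidate orthonormal projection
K = rbf_kernel(X @ W)                          # kernel on the *projected* data
L = np.outer(y, y)                             # simple label kernel
print(hsic(K, L))                              # objective value for this W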


Dimensionality reduction to maximize prediction generalization capability

arXiv.org Machine Learning

This work develops an analytically solvable unsupervised learning scheme that extracts the most informative components for predicting future inputs, termed predictive principal component analysis (PredPCA). Our scheme can effectively remove unpredictable observation noise and globally minimize the test prediction error. Mathematical analyses demonstrate that, with sufficiently high-dimensional observations that are generated by a linear or nonlinear system, PredPCA can identify the optimal hidden state representation, true system parameters, and true hidden state dimensionality, with a global convergence guarantee. We demonstrate the performance of PredPCA by using sequential visual inputs comprising handwritten digits, rotating 3D objects, and natural scenes. It reliably and accurately estimates distinct hidden states and predicts future outcomes of previously unseen test input data, even in the presence of considerable observation noise. The simple model structure and low computational cost of PredPCA make it highly desirable as a learning scheme for biological neural networks and neuromorphic chips. Prediction is essential for both biological organisms [1,2] and machine learning [3,4]. In particular, they need to predict the dynamics of newly encountered sensory input data (i.e., test data) based only on knowledge learned from a limited number of past experiences (i.e., training data). Generalization error, defined as the difference between the training and test prediction errors, is a standard measure of the capability to predict the future consequences of previously unseen input data.
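
A minimal sketch of the core idea as described above (linearly predict the next observation from the current one, then apply PCA to the predictions to extract the most predictive components) is given below; it is a simplified illustration on toy data, not the full PredPCA algorithm.

# Minimal sketch of the core idea: least-squares prediction of the next
# observation from the current one, followed by PCA on the predicted outputs.
# A simplified illustration, not the full PredPCA algorithm.
import numpy as np

rng = np.random.default_rng(0)
T, obs_dim, latent_dim = 2000, 30, 3

# Toy data: a low-dimensional rotating latent state observed with noise.
theta = 0.1
A = np.eye(latent_dim)
A[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
C = rng.normal(size=(obs_dim, latent_dim))
z = np.zeros((T, latent_dim))
z[0] = rng.normal(size=latent_dim)
for t in range(1, T):
    z[t] = A @ z[t - 1]
s = z @ C.T + 0.5 * rng.normal(size=(T, obs_dim))      # noisy observations

# 1) Linear prediction of s[t+1] from s[t] via least squares.
X, Y = s[:-1], s[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ W                                          # predicted next inputs

# 2) PCA on the predictions: eigenvectors of their covariance give the most
#    predictive (noise-suppressed) components.
Y_hat -= Y_hat.mean(axis=0)
cov = Y_hat.T @ Y_hat / (Y_hat.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
components = eigvecs[:, ::-1][:, :latent_dim]          # top predictive components
print(eigvals[::-1][:5])                               # spectrum concentrates in the first few components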