Principal Component Analysis


Robust Transfer Principal Component Analysis with Rank Constraints

Neural Information Processing Systems

Principal component analysis (PCA), a well-established technique for data analysis and processing, provides a convenient form of dimensionality reduction that is effective at removing the small Gaussian noise present in data. However, the applicability of standard principal component analysis in real scenarios is limited by its sensitivity to large errors. In this paper, we tackle the challenging problem of recovering data corrupted by errors of large magnitude by developing a novel robust transfer principal component analysis method. Our method is based on the assumption that useful information for the recovery of a corrupted data matrix can be gained from an uncorrupted, related data matrix. Specifically, we formulate the data recovery problem as a joint robust principal component analysis problem on the two data matrices, with common principal components shared across matrices and individual principal components specific to each data matrix.
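
The abstract does not spell out the joint objective; one plausible reading, with symbols of our own choosing (not necessarily the paper's notation), is that the corrupted matrix $X_1$ and the uncorrupted related matrix $X_2$ each decompose into a shared part spanned by common components $B_0$, an individual part spanned by matrix-specific components $B_i$, and, for $X_1$ only, a sparse error term $E_1$:

$$
\min_{B_0, B_i, A_i, C_i, E_1}\;
\big\| X_1 - B_0 A_1^\top - B_1 C_1^\top - E_1 \big\|_F^2
+ \big\| X_2 - B_0 A_2^\top - B_2 C_2^\top \big\|_F^2
+ \lambda \| E_1 \|_1 ,
$$

subject to rank constraints limiting the numbers of shared and individual components.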


Online Robust PCA via Stochastic Optimization

Neural Information Processing Systems

Robust PCA methods are typically based on batch optimization and have to load all the samples into memory. This prevents them from efficiently processing big data. In this paper, we develop an Online Robust Principal Component Analysis (OR-PCA) method that processes one sample per time instance, so its memory cost is independent of the data size, significantly enhancing computation and storage efficiency. The proposed method is based on stochastic optimization of an equivalent reformulation of the batch RPCA method. Indeed, we show that OR-PCA provides a sequence of subspace estimates converging to the optimum of its batch counterpart and hence is provably robust to sparse corruption.
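
A minimal per-sample sketch in the spirit of the method (the paper's exact stochastic reformulation differs; the alternating coefficient/error solve, the gradient step on the basis, and names like `soft_threshold` are our simplifications):

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise shrinkage used to model the sparse corruption."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def orpca_step(U, x, lam=0.1, lr=0.01, n_inner=50):
    """Process one sample: estimate coefficients r and sparse error e
    for the current basis U, then take a gradient step on U.
    A simplified sketch, not the paper's exact update."""
    r = np.zeros(U.shape[1])
    e = np.zeros_like(x)
    for _ in range(n_inner):                      # alternate between r and e
        r = np.linalg.lstsq(U, x - e, rcond=None)[0]
        e = soft_threshold(x - U @ r, lam)
    U -= lr * np.outer(U @ r + e - x, r)          # gradient step on the basis
    return U, r, e

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((100, 5)))[0]  # ambient dim 100, rank 5
for _ in range(1000):                               # one streaming sample at a time
    x = rng.standard_normal(100)
    U, r, e = orpca_step(U, x)
```

Only the current sample and the basis live in memory, which is the point of the online formulation.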


Correlated-PCA: Principal Components' Analysis when Data and Noise are Correlated

Neural Information Processing Systems

Given a matrix of observed data, Principal Components Analysis (PCA) computes a small number of orthogonal directions that contain most of its variability. Provably accurate solutions for PCA have been in use for a long time. However, to the best of our knowledge, all existing theoretical guarantees for it assume that the data and the corrupting noise are mutually independent, or at least uncorrelated. This is often valid in practice, but not always. In this paper, we study the PCA problem in the setting where the data and noise can be correlated.
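
A toy illustration of why the distinction matters, under assumptions of our own (synthetic low-rank data; the correlated corruption leaks a signal coordinate into a fixed direction `v`):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, r = 2000, 50, 2
U = np.linalg.qr(rng.standard_normal((d, r)))[0]   # true r-dim subspace
A = rng.standard_normal((n, r)) @ U.T              # clean low-rank data

def subspace_error(X, U_true):
    """Sine of the largest principal angle between estimated and true subspaces."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    U_hat = Vt[:r].T
    s = np.linalg.svd(U_true.T @ U_hat, compute_uv=False)
    return np.sqrt(max(0.0, 1 - s.min() ** 2))

v = rng.standard_normal(d); v /= np.linalg.norm(v)
indep = 0.5 * rng.standard_normal((n, d))          # noise independent of the data
corr = 0.5 * np.outer(A @ U[:, 0], v)              # noise correlated with the data

print("independent noise:", subspace_error(A + indep, U))
print("correlated noise: ", subspace_error(A + corr, U))
```

Independent noise averages out as the sample count grows, while the correlated term systematically rotates the estimated subspace toward `v`.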


Diffusion Approximations for Online Principal Component Estimation and Global Convergence

Neural Information Processing Systems

In this paper, we propose to adopt diffusion approximation tools to study the dynamics of Oja's iteration, an online stochastic gradient method for principal component analysis. Oja's iteration maintains a running estimate of the true principal component from streaming data and enjoys low time and space complexity. We show that Oja's iteration for the top eigenvector generates a continuous-state, discrete-time Markov chain over the unit sphere. We characterize Oja's iteration in three phases using diffusion approximation and weak convergence tools. Our three-phase analysis further provides a finite-sample error bound for the running estimate, which matches the minimax information lower bound for PCA under the additional assumption of bounded samples.
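
Oja's update itself is a one-line stochastic rule followed by renormalization onto the unit sphere, which is exactly what makes the iterate a continuous-state, discrete-time Markov chain on the sphere. A minimal sketch for the top eigenvector (the data model and the 1/t step size are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
# Synthetic covariance with a dominant top eigenvector v_true.
v_true = np.linalg.qr(rng.standard_normal((d, d)))[0][:, 0]
cov = 2.0 * np.outer(v_true, v_true) + 0.1 * np.eye(d)
L = np.linalg.cholesky(cov)

w = rng.standard_normal(d)
w /= np.linalg.norm(w)                 # running estimate on the unit sphere
for t in range(1, 20001):
    x = L @ rng.standard_normal(d)     # one streaming sample, E[x x^T] = cov
    w += (1.0 / t) * x * (x @ w)       # Oja's update: w <- w + eta_t * x x^T w
    w /= np.linalg.norm(w)             # project back onto the sphere

print("alignment |<w, v_true>|:", abs(w @ v_true))
```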


A Complete Guide to Principal Component Analysis -- PCA in Machine Learning

#artificialintelligence

Principal Component Analysis, or PCA, is a widely used technique for dimensionality reduction of large data sets. Reducing the number of components or features costs some accuracy; on the other hand, it makes a large data set simpler, easier to explore, and easier to visualize. It also reduces the computational complexity of the model, which makes machine learning algorithms run faster. How much accuracy is sacrificed in exchange for a less complex, lower-dimensional data set is always open to debate. In this article, we will discuss the step-by-step approach to achieving dimensionality reduction using PCA, and then I will show how all of this can be done using a Python library.
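
The step-by-step recipe typically looks like the following scikit-learn sketch (a generic example on synthetic data, not the article's data set):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))       # stand-in for the article's data set

# 1. Standardize: PCA is sensitive to the scale of each feature.
X_std = StandardScaler().fit_transform(X)

# 2. Fit PCA, keeping enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_std)

# 3. Inspect the accuracy/simplicity trade-off discussed above.
print("components kept:", pca.n_components_)
print("explained variance ratio:", pca.explained_variance_ratio_)
```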


Robust Principal Component Analysis Based On Maximum Correntropy Power Iterations

arXiv.org Machine Learning

Principal component analysis (PCA) is recognised as a quintessential data analysis technique when it comes to describing linear relationships between the features of a dataset. However, the well-known sensitivity of PCA to non-Gaussian samples and/or outliers often makes it unreliable in practice. To address this, a robust formulation of PCA is derived based on the maximum correntropy criterion (MCC) so as to maximise the expected likelihood of Gaussian distributed reconstruction errors. In this way, the proposed solution reduces to a generalised power iteration, whereby: (i) robust estimates of the principal components are obtained even in the presence of outliers; (ii) the number of principal components need not be specified in advance; and (iii) the entire set of principal components can be obtained, unlike with existing approaches. The advantages of the proposed maximum correntropy power iteration (MCPI) are demonstrated through an intuitive numerical example.
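
The abstract does not reproduce the generalised power iteration itself; the sketch below only conveys the general idea of correntropy-style weighting (Gaussian weights on reconstruction errors that down-weight outliers) inside a power iteration, with the function name and kernel width `sigma` being our choices rather than the paper's MCPI:

```python
import numpy as np

def mcc_weighted_pc(X, sigma=1.0, n_iter=100, seed=0):
    """Top principal direction via power iteration on a weighted covariance.
    Samples with large reconstruction error get small Gaussian (correntropy-
    style) weights, so outliers contribute little. A sketch, not the paper's
    exact MCPI algorithm."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        err = np.linalg.norm(Xc - np.outer(Xc @ w, w), axis=1)  # residual per sample
        g = np.exp(-err**2 / (2 * sigma**2))                    # correntropy weights
        C = (Xc * g[:, None]).T @ Xc / g.sum()                  # weighted covariance
        w = C @ w                                               # power iteration step
        w /= np.linalg.norm(w)
    return w
```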


Training a Machine Learning Model on a Dataset with Highly-Correlated Features

#artificialintelligence

In a previous article, we showed that a covariance matrix plot can be used for feature selection and dimensionality reduction: Feature Selection and Dimensionality Reduction Using Covariance Matrix Plot. We were therefore able to reduce the dimension of our feature space from 6 to 4. Now suppose we want to build a model on the new feature space to predict the crew variable. Looking at the covariance matrix plot, we see that there are strong correlations between the features (predictor variables). In this article, we shall use a technique called Principal Component Analysis (PCA) to transform our features into a space where they are independent or uncorrelated. We shall then train our model on the PCA space.
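
In scikit-learn terms, the train-on-PCA-space workflow amounts to a short pipeline. The sketch below uses synthetic correlated features and a synthetic target in place of the cruise-ship data and the `crew` variable from the article:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
base = rng.standard_normal((200, 2))
# Four features built from two latent factors, hence strongly correlated.
X = np.column_stack([base, base @ rng.standard_normal((2, 2))
                     + 0.05 * rng.standard_normal((200, 2))])
y = X @ np.array([0.8, 0.5, 0.3, 0.1]) + 0.1 * rng.standard_normal(200)

# PCA rotates the predictors into uncorrelated components before fitting.
model = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```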


The Sparse Reverse of Principal Component Analysis for Fast Low-Rank Matrix Completion

arXiv.org Machine Learning

Matrix completion receives tremendous attention from many research fields. It is commonly applied in recommender systems (e.g., movie ratings), computer vision (e.g., image reconstruction or completion), multi-task learning (e.g., collaboratively modeling time-series trends from multiple sensors), and many other applications. Matrix completion techniques are usually computationally expensive and/or fail to capture the heterogeneity in the data. For example, images usually contain a heterogeneous set of objects, so reconstructing images with high levels of missing data is challenging. In this paper, we propose the sparse reverse of principal component analysis for matrix completion. The proposed approach maintains smoothness across the matrix, produces accurate estimates of the missing data, converges iteratively, and is computationally tractable with a controllable upper bound on the number of iterations until convergence. The accuracy of the proposed technique is validated on natural images, movie ratings, and multisensor data, and it is compared with common benchmark methods for matrix completion.
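
The paper's algorithm is not detailed in the abstract. For orientation, one of the common benchmark methods it would be compared against is SoftImpute-style completion by iterated singular value shrinkage, sketched below; this is the generic baseline, not the proposed sparse-reverse approach:

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, n_iter=100):
    """Generic low-rank matrix completion by iterative soft-thresholded SVD
    (a SoftImpute-style baseline, not the paper's method).
    mask is True where X is observed."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Z_lr = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt  # shrink singular values
        Z = np.where(mask, X, Z_lr)                        # keep observed entries fixed
    return Z
```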


Adapting Machine Learning Algorithms to Novel Use Cases

#artificialintelligence

Measuring the kurtosis (fourth moment) of a statistical distribution may seem like a pedantic exercise for an introductory statistics class, after one learns about the first moment (mean), second moment (variance), and third moment (skewness). However, kurtosis can be used for some unexpected and significant use cases, particularly when employed as a special case of Independent Component Analysis (ICA) for unsupervised learning. ICA is a variant of PCA (Principal Component Analysis) in situations where the data distribution contains subcomponents that are statistically independent of each other, though generally not orthogonal.
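
A quick illustration of the kurtosis/ICA connection: mixtures of independent non-Gaussian sources look more Gaussian (excess kurtosis closer to zero) than the sources themselves, and FastICA recovers components by maximizing non-Gaussianity. The mixing matrix and source distributions below are arbitrary choices for the demo:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
S = np.column_stack([rng.laplace(size=n),          # heavy-tailed: positive kurtosis
                     rng.uniform(-1, 1, size=n)])  # flat: negative kurtosis
A = np.array([[1.0, 0.5], [0.5, 1.0]])             # mixing matrix
X = S @ A.T                                        # observed mixtures

print("kurtosis of mixtures:        ", kurtosis(X, axis=0))
S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)
print("kurtosis of recovered sources:", kurtosis(S_hat, axis=0))
```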


Principal Component Analysis Using Structural Similarity Index for Images

arXiv.org Machine Learning

Despite the advances of deep learning in specific tasks using images, the principled assessment of image fidelity and similarity is still a critical ability to develop. As it has been shown that Mean Squared Error (MSE) is insufficient for this task, other measures have been developed, one of the most effective being the Structural Similarity Index (SSIM). Such measures can be used for subspace learning, but existing methods in machine learning, such as Principal Component Analysis (PCA), are based on Euclidean distance or MSE and thus cannot properly capture the structural features of images. In this paper, we define an image structure subspace which discriminates between different types of image distortions. We propose Image Structural Component Analysis (ISCA), and also kernel ISCA, by using SSIM rather than Euclidean distance in the formulation of PCA. This paper provides a bridge between image quality assessment and manifold learning, opening a broad new area for future research.
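
The ISCA formulation itself is in the paper, but its premise is easy to check with scikit-image: two distortions of comparable MSE can score very differently under SSIM. The image and the distortions below are our own toy choices:

```python
import numpy as np
from skimage.metrics import structural_similarity, mean_squared_error

rng = np.random.default_rng(0)
img = rng.random((64, 64))

noisy = img + 0.1 * rng.standard_normal(img.shape)                  # unstructured noise
striped = img + 0.1 * np.sign(np.sin(np.arange(64) / 3))[None, :]   # structured stripes

for name, distorted in [("noise  ", noisy), ("stripes", striped)]:
    print(name,
          "MSE:", round(mean_squared_error(img, distorted), 4),
          "SSIM:", round(structural_similarity(
              img, distorted,
              data_range=distorted.max() - distorted.min()), 4))
```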