Principal Component Analysis


Online PCA for Contaminated Data

Neural Information Processing Systems

We consider online Principal Component Analysis (PCA) for contaminated samples (containing outliers) that are revealed sequentially to the Principal Components (PCs) estimator. Due to their sensitivity to outliers, previous online PCA algorithms fail in this case and their results can be arbitrarily bad. Here we propose the online robust PCA (RPCA) algorithm, which is able to steadily improve the PC estimates upon an initial one, even when faced with a constant fraction of outliers. We show that the final result of the proposed online RPCA suffers only an acceptable degradation from the optimum. In fact, under mild conditions, online RPCA achieves the maximal robustness with a 50% breakdown point.
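A minimal sketch of the general idea (not the paper's algorithm): an Oja-style online PCA update that skips samples whose reconstruction residual is anomalously large, as a crude guard against outliers. The function name, learning rate, and rejection threshold below are illustrative assumptions.

```python
# A minimal sketch, not the paper's online RPCA: Oja-style online PCA with a
# crude residual-based outlier rejection rule. Rates and thresholds are illustrative.
import numpy as np

def online_robust_pca_sketch(stream, k, lr=0.01, resid_thresh=3.0):
    """Sequentially update an orthonormal basis U (d x k) from samples x in `stream`."""
    U = None
    resid_scale = None
    for x in stream:
        x = np.asarray(x, dtype=float)
        if U is None:
            # Initialize with a random orthonormal basis of matching dimension.
            rng = np.random.default_rng(0)
            U, _ = np.linalg.qr(rng.standard_normal((x.size, k)))
            resid_scale = np.linalg.norm(x)
        proj = U @ (U.T @ x)
        resid = np.linalg.norm(x - proj)
        # Running estimate of a "typical" residual; reject gross outliers.
        resid_scale = 0.95 * resid_scale + 0.05 * resid
        if resid > resid_thresh * resid_scale:
            continue  # treat the sample as an outlier and skip the update
        # Oja-style gradient step followed by re-orthonormalization.
        U += lr * np.outer(x, x @ U)
        U, _ = np.linalg.qr(U)
    return U
```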


Dirty Statistical Models

Neural Information Processing Systems

We provide a unified framework for the high-dimensional analysis of "superposition-structured" or "dirty" statistical models, in which the model parameters are a "superposition" of structurally constrained parameters. We allow for any number and types of structures, and any statistical model. We consider the general class of M-estimators that minimize the sum of any loss function and an instance of what we call "hybrid" regularization, that is, the infimal convolution of weighted regularization functions, one for each structural component. We provide corollaries showcasing our unified framework for varied statistical models such as linear regression, multiple regression, and principal component analysis, over varied superposition structures.
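As a concrete, much simplified instance of such a superposition-structured M-estimator, the sketch below fits a least-squares regression whose parameter vector is split into a sparse part and a dense part, penalized by an l1 norm and a squared l2 norm respectively, using alternating proximal gradient steps. All names, penalties, and step sizes are illustrative assumptions, not the paper's estimator.

```python
# A minimal sketch of a "dirty" regression: y ~ X @ (theta_sparse + theta_dense),
# with an l1 penalty on the sparse part and (lam_dense / 2) * ||theta_dense||^2
# on the dense part, solved by proximal gradient steps on the joint parameters.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dirty_regression_sketch(X, y, lam_sparse=0.1, lam_dense=1.0, n_iter=500):
    n, d = X.shape
    step = n / (2.0 * np.linalg.norm(X, 2) ** 2)   # safe step for the joint smooth loss
    th_sparse = np.zeros(d)
    th_dense = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ (th_sparse + th_dense) - y) / n
        th_sparse = soft_threshold(th_sparse - step * grad, step * lam_sparse)  # prox of l1
        th_dense = (th_dense - step * grad) / (1.0 + step * lam_dense)          # prox of squared l2
    return th_sparse, th_dense
```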


Quantum-Assisted Hilbert-Space Gaussian Process Regression

arXiv.org Artificial Intelligence

Gaussian processes are probabilistic models that are commonly used as functional priors in machine learning. Due to their probabilistic nature, they can be used to capture prior information on the noise statistics, the smoothness of the functions, and training data uncertainty. However, their computational complexity quickly becomes intractable as the size of the data set grows. We propose a Hilbert space approximation-based quantum algorithm for Gaussian process regression to overcome this limitation. Our method combines a classical basis function expansion with the quantum computing techniques of quantum principal component analysis, conditional rotations, and Hadamard and Swap tests. The quantum principal component analysis is used to estimate the eigenvalues, while the conditional rotations and the Hadamard and Swap tests are employed to evaluate the posterior mean and variance of the Gaussian process. Our method provides a polynomial reduction in computational complexity over the classical method.
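For reference, the classical Hilbert-space (reduced-rank) approximation that such an algorithm builds on can be sketched as below for a 1-D RBF kernel; the quantum components (quantum PCA, conditional rotations, Hadamard and Swap tests) are not reproduced. The domain half-width, number of basis functions, and hyperparameters are illustrative assumptions.

```python
# A classical sketch of a Hilbert-space (reduced-rank) GP posterior mean:
# Laplacian eigenfunctions on [-L, L] weighted by the RBF spectral density.
import numpy as np

def hs_gp_posterior_mean(x_train, y_train, x_test, m=32, L=5.0,
                         lengthscale=1.0, sigma_f=1.0, sigma_n=0.1):
    js = np.arange(1, m + 1)
    sqrt_lams = np.pi * js / (2.0 * L)                     # square roots of the eigenvalues
    # Spectral density of the 1-D RBF kernel evaluated at sqrt(eigenvalues).
    S = sigma_f**2 * np.sqrt(2 * np.pi) * lengthscale \
        * np.exp(-0.5 * (lengthscale * sqrt_lams) ** 2)

    def phi(x):
        # Laplacian eigenfunctions with Dirichlet boundaries on [-L, L].
        return np.sin(sqrt_lams * (x[:, None] + L)) / np.sqrt(L)

    Phi = phi(np.asarray(x_train, dtype=float))
    A = Phi.T @ Phi + sigma_n**2 * np.diag(1.0 / S)
    coeffs = np.linalg.solve(A, Phi.T @ np.asarray(y_train, dtype=float))
    return phi(np.asarray(x_test, dtype=float)) @ coeffs
```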


GT-PCA: Effective and Interpretable Dimensionality Reduction with General Transform-Invariant Principal Component Analysis

arXiv.org Artificial Intelligence

Data analysis often requires methods that are invariant with respect to specific transformations, such as rotations in the case of images or shifts in the case of images and time series. While principal component analysis (PCA) is a widely used dimension reduction technique, it lacks robustness with respect to these transformations. Modern alternatives, such as autoencoders, can be invariant with respect to specific transformations but are generally not interpretable. We introduce General Transform-Invariant Principal Component Analysis (GT-PCA) as an effective and interpretable alternative to PCA and autoencoders. We propose a neural network that efficiently estimates the components and show that GT-PCA significantly outperforms alternative methods in experiments based on synthetic and real data.
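To make the invariance idea concrete, the sketch below alternates between aligning each sample over a family of circular shifts and re-fitting ordinary PCA on the aligned data. This only illustrates transform-invariant components under an assumed shift family; it is not the neural-network estimator proposed in the paper.

```python
# A minimal sketch of shift-invariant components: align each sample to the
# current components over all circular shifts, then refit PCA on the aligned data.
import numpy as np

def shift_invariant_pca_sketch(X, k, n_iter=10):
    """X: (n, d) time series; k: number of components. Returns a (d, k) loading matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                   # initialize with plain PCA
    n, d = Xc.shape
    for _ in range(n_iter):
        aligned = np.empty_like(Xc)
        for i, x in enumerate(Xc):
            # Choose the circular shift whose projection onto W explains x best.
            shifts = np.stack([np.roll(x, s) for s in range(d)])
            errs = ((shifts - shifts @ W @ W.T) ** 2).sum(axis=1)
            aligned[i] = shifts[np.argmin(errs)]
        _, _, Vt = np.linalg.svd(aligned - aligned.mean(axis=0), full_matrices=False)
        W = Vt[:k].T
    return W
```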


Federated Multilinear Principal Component Analysis with Applications in Prognostics

arXiv.org Machine Learning

The use of tensors is increasingly widespread in the realms of data analytics and machine learning. As an extension of vectors and matrices, a tensor is a multi-dimensional array of numbers that provides a means to represent data across multiple dimensions. As an illustration, Figure 1 shows an image stream that can be seen as a three-dimensional tensor, where the first two dimensions denote the pixels within each image, while the third dimension represents the distinct images in the sequence. One of the advantages of representing data as a tensor, as opposed to reshaping it into a vector or matrix, lies in its ability to capture intricate relationships within the data, especially when interactions occur across multiple dimensions. For instance, the image stream depicted in Figure 1 exhibits a spatiotemporal correlation structure. Specifically, pixels within each image are spatially correlated, and pixels at the same location across multiple images are temporally correlated. Transforming the image stream into a vector or matrix would disrupt this spatiotemporal correlation structure, whereas representing it as a three-dimensional tensor preserves it. In addition to capturing intricate relationships, other benefits of using tensors include compatibility with multi-modal data (i.e., accommodating diverse types of data in a unified structure) and facilitation of parallel processing (i.e., enabling the parallelization of operations). As a result, the volume of research in tensor-based data analytics has been rapidly increasing in recent years (Shen et al., 2022; Gahrooei et al., 2021; Yan et al., 2019; Hu et al., 2023; Zhen et al., 2023; Zhang et al., 2023).
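A small illustration of the representation point made above, assuming a hypothetical stream of 64 x 48 grayscale frames: keeping the stream as a third-order tensor preserves which pixels are spatial neighbours and which frames are temporal neighbours, whereas flattening each frame into a vector hides the spatial arrangement.

```python
# Illustrative only: an image stream as a 3-D tensor (height x width x time)
# versus the same data flattened into a pixels x frames matrix.
import numpy as np

frames = [np.random.rand(64, 48) for _ in range(100)]      # 100 frames of 64 x 48 pixels
tensor = np.stack(frames, axis=2)                          # shape (64, 48, 100)

spatial_pair = tensor[10, 20, 0], tensor[10, 21, 0]        # adjacent pixels, same frame
temporal_pair = tensor[10, 20, 0], tensor[10, 20, 1]       # same pixel, consecutive frames

flattened = tensor.reshape(64 * 48, 100)                   # matrix: pixels x frames
# In `flattened`, horizontally adjacent pixels stay adjacent (rows 10*48+20 and
# 10*48+21), but vertical neighbours such as (10, 20) and (11, 20) end up 48 rows
# apart, so the 2-D spatial structure is no longer explicit in the matrix layout.
```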


Dimensionality Reduction and Wasserstein Stability for Kernel Regression

arXiv.org Machine Learning

In a high-dimensional regression framework, we study the consequences of the naive two-step procedure in which the dimension of the input variables is first reduced and, second, the reduced input variables are used to predict the output variable with kernel regression. In order to analyze the resulting regression errors, a novel stability result for kernel regression with respect to the Wasserstein distance is derived. This allows us to bound errors that occur when perturbed input data is used to fit the regression function. We apply the general stability result to principal component analysis (PCA). Exploiting known estimates from the literature on both principal component analysis and kernel regression, we deduce convergence rates for the two-step procedure. The latter turns out to be particularly useful in a semi-supervised setting.
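A minimal sketch of the two-step procedure analysed here, using standard scikit-learn components: project the inputs onto their leading principal components, then fit kernel ridge regression on the reduced inputs. The number of components and kernel hyperparameters are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of the two-step procedure: PCA for dimension reduction,
# followed by kernel ridge regression on the reduced inputs.
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline

def two_step_regression(X_train, y_train, X_test, n_components=5):
    model = make_pipeline(
        PCA(n_components=n_components),        # step 1: reduce the input dimension
        KernelRidge(kernel="rbf", alpha=1.0),  # step 2: kernel regression on reduced inputs
    )
    model.fit(X_train, y_train)
    return model.predict(X_test)
```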


$\sigma$-PCA: a unified neural model for linear and nonlinear principal component analysis

arXiv.org Machine Learning

Linear principal component analysis (PCA), nonlinear PCA, and linear independent component analysis (ICA) -- these are three methods with single-layer autoencoder formulations for learning linear transformations from data. Linear PCA learns orthogonal transformations (rotations) that orient axes to maximise variance, but it suffers from a subspace rotational indeterminacy: it fails to find a unique rotation for axes that share the same variance. Both nonlinear PCA and linear ICA reduce the subspace indeterminacy from rotational to permutational by maximising statistical independence under the assumption of unit variance. The relationship between all three can be understood through the singular value decomposition of the linear ICA transformation into a sequence of rotation, scale, and rotation. Linear PCA learns the first rotation; nonlinear PCA learns the second. The scale is simply the inverse of the standard deviations. The problem is that, in contrast to linear PCA, conventional nonlinear PCA cannot be used directly on the data to learn the first rotation, the first being special in that it reduces dimensionality and orders by variances. In this paper, we identify the cause, and as a solution we propose $\sigma$-PCA: a unified neural model for linear and nonlinear PCA as single-layer autoencoders. One of its key ingredients is modelling not just the rotation but also the scale -- the variances. This model bridges the disparity between linear and nonlinear PCA. Like linear PCA, it can therefore learn a semi-orthogonal transformation that reduces dimensionality and orders by variances, but, unlike linear PCA, it does not suffer from rotational indeterminacy.
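A small numerical illustration of the decomposition described above, under the assumption of a 2-D linear ICA model with Laplacian sources: the covariance eigendecomposition supplies the first rotation (the principal axes) and the scale (the inverse standard deviations), after which only a residual rotation is left for nonlinear PCA or ICA to resolve. This illustrates the rotation-scale-rotation factorization only; it is not the $\sigma$-PCA model itself.

```python
# Illustrative only: PCA supplies the first rotation and the scale of the
# rotation-scale-rotation factorization of a linear ICA transformation.
import numpy as np

rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 10000))              # independent, non-Gaussian sources
A = rng.standard_normal((2, 2))               # mixing matrix
X = A @ S                                     # observed mixed data

# PCA part: eigendecomposition of the covariance gives the principal axes
# (first rotation) and the variances along them.
C = np.cov(X)
variances, principal_axes = np.linalg.eigh(C)

# Whitening = first rotation followed by scaling with inverse standard deviations.
whitened = np.diag(1.0 / np.sqrt(variances)) @ principal_axes.T @ X
print(np.cov(whitened).round(2))              # ~ identity: variance information is removed

# What remains for ICA / nonlinear PCA is the second rotation: the orthogonal
# transformation of `whitened` that cannot be identified from variances alone.
```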


Federated Learning for Sparse Principal Component Analysis

arXiv.org Machine Learning

In the rapidly evolving realm of machine learning, algorithm effectiveness often faces limitations due to data quality and availability. Traditional approaches grapple with data sharing due to legal and privacy concerns. The federated learning framework addresses this challenge. Federated learning is a decentralized approach where model training occurs on the client side, preserving privacy by keeping data localized. Instead of sending raw data to a central server, only model updates are exchanged, enhancing data security. We apply this framework to Sparse Principal Component Analysis (SPCA) in this work. SPCA aims to attain sparse component loadings while maximizing data variance for improved interpretability. Besides the L1-norm regularization term in conventional SPCA, we add a smoothing function to facilitate gradient-based optimization methods. Moreover, in order to improve computational efficiency, we introduce a least squares approximation to the original SPCA. This enables analytic solutions in the optimization steps, leading to substantial computational improvements. Within the federated framework, we formulate SPCA as a consensus optimization problem, which can be solved using the Alternating Direction Method of Multipliers (ADMM). Our extensive experiments involve both IID and non-IID random features across various data owners. Results on synthetic and public datasets affirm the efficacy of our federated SPCA approach.
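For orientation, the sketch below shows a centralized, single-component version of the sparse-PCA objective: a power-iteration update with l1 soft-thresholding to encourage sparse loadings. The federated consensus formulation, the smoothing function, the least-squares approximation, and the ADMM solver from the paper are not reproduced; the function name and regularization strength are illustrative assumptions.

```python
# A minimal, centralized sketch of sparse PCA for the first component:
# power iteration on the covariance with l1 soft-thresholding of the loadings.
import numpy as np

def sparse_first_component_sketch(X, lam=0.1, n_iter=200):
    """X: (n, d) data matrix, assumed centered. Returns a sparse unit loading vector."""
    C = X.T @ X / X.shape[0]            # sample covariance
    v = np.linalg.eigh(C)[1][:, -1]     # initialize with the leading eigenvector
    for _ in range(n_iter):
        w = C @ v
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # soft-threshold (l1)
        if np.linalg.norm(w) == 0:
            break                        # lam too large: all loadings shrunk to zero
        v = w / np.linalg.norm(w)
    return v
```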


On the estimation of the number of components in multivariate functional principal component analysis

arXiv.org Machine Learning

Happ and Greven [2018] develop innovative theory and methodology for the dimension reduction of multivariate functional data on possibly different dimensional domains (e.g., curves and images), which extends existing methods that were limited to either univariate functional data or multivariate functional data on a common one-dimensional domain. Recent research has shown a growing presence of data defined on different dimensional domains in diverse fields such as biomechanics, e.g., Warmenhoven et al. [2019], and neuroscience, e.g., Song and Kim [2022], so we expect the work to have significant practical impact. We aim to provide commentary on the estimation of the number of principal components utilising the methodology devised in Happ and Greven [2018]. To achieve this, we conduct an extensive simulation study and subsequently propose practical guidelines for practitioners to adeptly choose the appropriate number of components for multivariate functional datasets. For ease of presentation, we use the same notation as in Happ and Greven [2018].
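One simple rule that such guidelines are typically compared against is the proportion-of-variance-explained criterion sketched below; this is a generic heuristic applied to assumed eigenvalue estimates, not the guidelines proposed in the commentary.

```python
# A generic sketch: pick the smallest number of components whose cumulative
# proportion of variance explained exceeds a threshold.
import numpy as np

def n_components_by_pve(eigenvalues, threshold=0.95):
    """eigenvalues: estimated eigenvalues in decreasing order."""
    ev = np.asarray(eigenvalues, dtype=float)
    pve = np.cumsum(ev) / ev.sum()
    return int(np.searchsorted(pve, threshold) + 1)

# Example: with geometrically decaying eigenvalues, 4 components reach the 95% level.
print(n_components_by_pve([8.0, 4.0, 2.0, 1.0, 0.5, 0.25]))   # prints 4
```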


Hessian Eigenvectors and Principal Component Analysis of Neural Network Weight Matrices

arXiv.org Artificial Intelligence

This study delves into the intricate dynamics of trained deep neural networks and their relationships with network parameters. Trained networks predominantly continue training in a single direction, known as the drift mode. This drift mode can be explained by the quadratic potential model of the loss function, suggesting a slow exponential decay towards the potential minima. We unveil a correlation between Hessian eigenvectors and network weights. This relationship, hinging on the magnitude of eigenvalues, allows us to discern parameter directions within the network. Notably, the significance of these directions relies on two defining attributes: the curvature of their potential wells (indicated by the magnitude of Hessian eigenvalues) and their alignment with the weight vectors. Our exploration extends to the decomposition of weight matrices through singular value decomposition. This approach proves practical in identifying critical directions within the Hessian, considering both their magnitude and curvature. Furthermore, our examination showcases the applicability of principal component analysis in approximating the Hessian, with update parameters emerging as a superior choice over weights for this purpose. Remarkably, our findings unveil a similarity between the largest Hessian eigenvalues of individual layers and those of the entire network. Notably, higher eigenvalues are concentrated more in deeper layers. Leveraging these insights, we venture into addressing catastrophic forgetting, a challenge faced by neural networks when learning new tasks while retaining knowledge from previous ones. By applying our discoveries, we formulate an effective strategy to mitigate catastrophic forgetting, offering a possible solution that can be applied to networks of varying scales, including larger architectures.
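A minimal sketch of one step of such an analysis, assuming a hypothetical list of flattened parameter-update vectors recorded during training: stacking the updates and taking their leading singular vectors yields principal directions in parameter space that can be compared with the dominant Hessian eigenvectors. How updates are recorded and how the comparison is carried out in the paper are not reproduced here.

```python
# Illustrative only: PCA over recorded parameter updates as a cheap proxy for
# the dominant Hessian directions. `updates` is an assumed list of 1-D arrays.
import numpy as np

def top_directions_from_updates(updates, k=5):
    """updates: list of 1-D arrays (one flattened update per step). Returns (k, p) directions."""
    U = np.stack(updates)                    # shape (num_steps, num_parameters)
    U = U - U.mean(axis=0)                   # center across steps
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    return Vt[:k]                            # leading principal directions in parameter space
```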