Chamadia, Shubham
Outlier Processing Via L1-Principal Subspaces
Chamadia, Shubham (University at Buffalo) | Pados, Dimitris A. (University at Buffalo)
With the advent of big data, there is a growing demand for smart algorithms that can extract relevant information from high-dimensional large data sets, potentially corrupted by faulty measurements (outliers). In this context, we present a novel line of research that utilizes the robust nature of L1-norm subspaces for data dimensionality reduction and outlier processing. Specifically, we (i) use the Euclidean distance between each original sample and its L1-norm-subspace projection as a weight assigned to that sample, (ii) perform (K=2)-means clustering over the one-dimensional weights and discard the samples belonging to the outlier cluster, and (iii) compute the robust L1-norm principal subspaces over the reduced “clean” data set for further applications. Numerical studies included in this paper from the fields of (i) data dimensionality reduction, (ii) direction-of-arrival estimation, (iii) image fusion, and (iv) video foreground extraction demonstrate the efficacy of the proposed outlier processing algorithm in designing robust low-dimensional subspaces from faulty high-dimensional data.
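A minimal numpy sketch of the three-step procedure described in the abstract, assuming a one-dimensional L1-norm subspace; the function names (l1_pc, remove_outliers) are ours, and l1_pc uses a greedy single-sign-flipping heuristic as a stand-in for however the L1-norm subspace is actually computed in the paper:

```python
import numpy as np

def l1_pc(X, max_iter=100, seed=0):
    """Greedy sign-flipping heuristic for the first L1 principal component:
    approximately maximize ||X b||_2 over b in {-1,+1}^N, then return
    q = X b / ||X b||_2.  A stand-in for an exact L1-PC computation."""
    N = X.shape[1]
    rng = np.random.default_rng(seed)
    b = rng.choice([-1.0, 1.0], size=N)
    for _ in range(max_iter):
        v = X @ b
        # Flipping bit n changes X b to X b - 2 b[n] X[:, n]; score every flip.
        scores = np.array([np.linalg.norm(v - 2.0 * b[n] * X[:, n])
                           for n in range(N)])
        n_best = int(np.argmax(scores))
        if scores[n_best] <= np.linalg.norm(v):
            break                              # no single flip improves the objective
        b[n_best] = -b[n_best]
    q = X @ b
    return q / np.linalg.norm(q)

def remove_outliers(X):
    """Sketch of steps (i)-(iii) from the abstract, for a 1-D L1 subspace."""
    q = l1_pc(X)                               # L1-norm subspace basis vector
    resid = X - np.outer(q, q) @ X             # residuals after projection onto span{q}
    w = np.linalg.norm(resid, axis=0)          # (i) weight = distance to the subspace
    c = np.array([w.min(), w.max()])           # (ii) (K=2)-means on the scalar weights
    for _ in range(50):
        labels = (np.abs(w - c[0]) > np.abs(w - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = w[labels == k].mean()
    clean = X[:, labels == int(np.argmin(c))]  # keep the low-weight ("clean") cluster
    return clean, l1_pc(clean)                 # (iii) L1 subspace of the clean data
```

Given a D-by-N data matrix X (one sample per column), remove_outliers returns the retained columns together with a refreshed L1-norm subspace basis computed over them.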
Efficient L1-Norm Principal-Component Analysis via Bit Flipping
Markopoulos, Panos P., Kundu, Sandipan, Chamadia, Shubham, Pados, Dimitris A.
It was shown recently that the $K$ L1-norm principal components (L1-PCs) of a real-valued data matrix $\mathbf X \in \mathbb R^{D \times N}$ ($N$ data samples of $D$ dimensions) can be exactly calculated with cost $\mathcal{O}(2^{NK})$ or, when advantageous, $\mathcal{O}(N^{dK - K + 1})$ where $d=\mathrm{rank}(\mathbf X)$ and $K < d$.
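To make the quoted complexity concrete for $K = 1$: the first L1-PC is known to be recoverable from a binary sign vector, so the exact solution can be found by exhaustive search over all $2^N$ candidates, as in the numpy sketch below (the function name l1_pc_exact is ours). This is only feasible for small $N$, which is exactly the bottleneck that the bit-flipping algorithm of the title addresses.

```python
import numpy as np
from itertools import product

def l1_pc_exact(X):
    """Exact first L1-PC by exhaustive search: the unit-norm maximizer of
    ||X^T q||_1 is q* = X b* / ||X b*||_2, where b* maximizes ||X b||_2
    over b in {-1,+1}^N.  Enumerating all sign vectors costs O(2^N),
    the K = 1 case of the O(2^{NK}) figure quoted above."""
    N = X.shape[1]
    best_val, best_b = -1.0, None
    for bits in product((-1.0, 1.0), repeat=N):  # all 2^N sign vectors
        b = np.array(bits)
        val = np.linalg.norm(X @ b)
        if val > best_val:
            best_val, best_b = val, b
    q = X @ best_b
    return q / np.linalg.norm(q)
```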