
Collaborating Authors

 Deng, Nai-Yang


Principal Component Analysis Based on T$\ell_1$-norm Maximization

arXiv.org Machine Learning

Classical principal component analysis (PCA) is sensitive to outliers and noise. PCA based on the $\ell_1$-norm and on the $\ell_p$-norm ($0 < p < 1$) has therefore been studied. Among these variants, those based on the $\ell_p$-norm appear most interesting from the robustness point of view, but their numerical performance is unsatisfactory. Although the T$\ell_1$-norm is similar to the $\ell_p$-norm ($0 < p < 1$) in some sense, it suppresses outliers more strongly and has better continuity. We therefore propose PCA based on the T$\ell_1$-norm. Our numerical experiments show that its performance is clearly superior to PCA-$\ell_p$ and $\ell_p$SPCA, as well as to PCA and PCA-$\ell_1$.
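The flavor of such a norm-maximization PCA can be sketched as a greedy search for one unit projection direction by normalized gradient ascent. This is only an illustrative sketch, not the paper's exact algorithm, and it assumes the common transformed-$\ell_1$ form $T\ell_1(z) = (a+1)|z|/(a+|z|)$ with a hypothetical default $a = 1$:

```python
import numpy as np

def tl1(z, a=1.0):
    """Transformed ell_1 penalty (assumed form): (a+1)|z| / (a+|z|)."""
    z = np.abs(z)
    return (a + 1.0) * z / (a + z)

def pca_tl1_direction(X, a=1.0, n_iter=200, tol=1e-8, seed=0):
    """Sketch: find one w with ||w|| = 1 maximizing sum_i tl1(x_i . w)
    by normalized gradient ascent. X is assumed centered,
    shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        z = X @ w
        # d/dz tl1(z) = a(a+1) sign(z) / (a + |z|)^2, chained through X
        g = X.T @ (a * (a + 1.0) * np.sign(z) / (a + np.abs(z)) ** 2)
        nrm = np.linalg.norm(g)
        if nrm < 1e-12:
            break
        w_new = g / nrm
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w
```

Further components would be extracted greedily after deflating `X` against the found direction, as in $\ell_1$-PCA-style procedures.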


Single Versus Union: Non-parallel Support Vector Machine Frameworks

arXiv.org Machine Learning

Chun-Na Li, Yuan-Hai Shao, Huajun Wang, Yu-Ting Zhao, Ling-Wei Huang, Naihua Xiu and Nai-Yang Deng. Considering the classification problem, we group nonparallel support vector machines, which use nonparallel hyperplanes, into two types of frameworks. The first type solves a series of small optimization problems to obtain the hyperplanes one by one, but it is hard to measure the loss of each sample. The second type constructs all the hyperplanes simultaneously by solving one large optimization problem in which the loss of each sample is well defined. We give the characteristics of each framework and compare them carefully. In addition, based on the second framework, we construct a max-min distance-based nonparallel support vector machine for the multiclass classification problem, called NSVM. Experimental results on benchmark data sets and human face databases show the advantages of our NSVM. For the binary classification problem, the generalized eigenvalue proximal support vector machine (GEPSVM) proposed by Mangasarian and Wild [1] in 2006 was the first nonparallel support vector machine. It generates two nonparallel hyperplanes such that each hyperplane is close to its own class and as far as possible from the other class. GEPSVM is effective, particularly on "Xor"-type data [1], which has led to extensive studies of nonparallel support vector machines (NSVMs) [2]-[5].
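The GEPSVM idea described above reduces each plane to a generalized eigenvalue problem. A minimal sketch of one of the two planes follows; the Tikhonov term `delta` is added to both matrices here for numerical stability, which is an implementation choice rather than necessarily the paper's exact regularization:

```python
import numpy as np
from scipy.linalg import eigh

def gepsvm_plane(A, B, delta=1e-3):
    """Plane w.x + b = 0 close to class A and far from class B:
    minimize ||G z||^2 / ||H z||^2 over z = [w; b], where
    G = [A 1] and H = [B 1]. Solved as the smallest generalized
    eigenvector of (G'G + delta I) z = lambda (H'H + delta I) z."""
    G = np.hstack([A, np.ones((len(A), 1))])
    H = np.hstack([B, np.ones((len(B), 1))])
    P = G.T @ G + delta * np.eye(G.shape[1])
    Q = H.T @ H + delta * np.eye(H.shape[1])
    vals, vecs = eigh(P, Q)   # eigenvalues in ascending order
    z = vecs[:, 0]            # minimizer of the Rayleigh quotient
    return z[:-1], z[-1]      # (w, b)
```

The second plane is obtained by swapping the roles of `A` and `B`; a test point is then assigned to the class whose plane it is nearer to.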


Robust Bhattacharyya bound linear discriminant analysis through adaptive algorithm

arXiv.org Machine Learning

In this paper, we propose novel linear discriminant analysis criteria via Bhattacharyya error bound estimation, based on the L1-norm (L1BLDA) and the L2-norm (L2BLDA). Both L1BLDA and L2BLDA maximize the between-class scatter, measured by the weighted pairwise distances of class means, while minimizing the within-class scatter under the L1-norm and L2-norm, respectively. The proposed models avoid the small sample size (SSS) problem and have no rank limit, both of which may be encountered in LDA. It is worth mentioning that the L1-norm gives L1BLDA its robust performance, and that L1BLDA is solved through an effective non-greedy alternating direction method of multipliers (ADMM), in which all the projection vectors are obtained at once. In addition, the weighting constants between the between-class and within-class terms of L1BLDA and L2BLDA are determined by the data set involved, which makes L1BLDA and L2BLDA adaptive. Experimental results on benchmark data sets as well as handwritten digit databases demonstrate the effectiveness of the proposed methods.
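The L2 case admits a direct sketch: a weighted pairwise between-class scatter minus a scaled within-class scatter, maximized by an ordinary (not generalized) eigendecomposition, which is why no matrix inverse — and hence no SSS problem or rank limit — arises. The pairwise weights `sqrt(p_i p_j)` and the trade-off constant `c` below are illustrative assumptions, not necessarily the paper's adaptive choices:

```python
import numpy as np

def blda_l2_sketch(X, y, n_components=2, c=1.0):
    """Illustrative L2BLDA-style sketch: maximize weighted pairwise
    between-class scatter minus c * within-class scatter.
    X: (n_samples, d); returns (d, n_components) projection matrix."""
    classes = np.unique(y)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    means = {k: X[y == k].mean(axis=0) for k in classes}
    priors = {k: (y == k).mean() for k in classes}
    for i, ki in enumerate(classes):
        Xi = X[y == ki]
        diff = Xi - means[ki]
        Sw += priors[ki] * diff.T @ diff / len(Xi)   # prior-weighted class scatter
        for kj in classes[i + 1:]:
            dm = (means[ki] - means[kj]).reshape(-1, 1)
            # assumed pairwise weight sqrt(p_i p_j); the paper derives
            # its weights from the Bhattacharyya bound
            Sb += np.sqrt(priors[ki] * priors[kj]) * dm @ dm.T
    M = Sb - c * Sw                      # difference form: no inversion needed
    vals, vecs = np.linalg.eigh(M)       # ascending eigenvalues
    return vecs[:, ::-1][:, :n_components]
```

The L1 variant replaces the squared distances with absolute ones, which is what makes the non-greedy ADMM solver necessary.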


Generalized two-dimensional linear discriminant analysis with regularization

arXiv.org Machine Learning

Recent advances show that two-dimensional linear discriminant analysis (2DLDA) is a successful matrix-based dimensionality reduction method. However, 2DLDA may encounter the singularity issue and is sensitive to outliers. In this paper, a generalized Lp-norm 2DLDA framework with regularization for an arbitrary $p>0$, named G2DLDA, is proposed. G2DLDA makes two main contributions. First, the model measures the between-class and within-class scatter with an arbitrary Lp-norm, so a proper $p$ can be selected to achieve robustness. Second, by introducing an extra regularization term, G2DLDA achieves better generalization performance and resolves the singularity problem. In addition, G2DLDA can be solved through a series of convex problems with equality constraints, each of which has a closed-form solution. Its convergence is guaranteed theoretically when $1\leq p\leq2$. Preliminary experimental results on three contaminated human face databases show the effectiveness of the proposed G2DLDA.