
 Chang, Xiangyu


Randomized spectral co-clustering for large-scale directed networks

arXiv.org Machine Learning

Directed networks are commonly used to represent asymmetric relationships among units. Co-clustering aims to cluster the senders and receivers of a directed network simultaneously. In particular, the well-known spectral clustering algorithm can be modified into a spectral co-clustering algorithm for directed networks. However, large-scale networks pose a computational challenge to this approach. In this paper, we leverage randomized sketching techniques to accelerate spectral co-clustering algorithms so that large-scale directed networks can be co-clustered more efficiently. Specifically, we derive two series of randomized spectral co-clustering algorithms, one random-projection-based and the other random-sampling-based. Theoretically, we analyze the resulting algorithms under two generative models: the \emph{stochastic co-block model} and the \emph{degree-corrected stochastic co-block model}. The approximation error rates and misclustering error rates of the two proposed randomized spectral co-clustering algorithms are established, and they improve upon the state-of-the-art results in the co-clustering literature. Numerically, we conduct simulations to support our theoretical results and test the efficiency of the algorithms on real networks with up to tens of millions of nodes.
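
To make the random-projection idea concrete, the sketch below shows how a randomized SVD of the adjacency matrix can feed a spectral co-clustering step: senders are clustered on the approximate left singular vectors and receivers on the approximate right singular vectors. This is only an illustrative Python/NumPy sketch under simplifying assumptions (a Gaussian test matrix, a hypothetical `randomized_coclustering` helper, no degree correction); it is not the paper's exact algorithm or tuning.

```python
import numpy as np
from sklearn.cluster import KMeans

def randomized_coclustering(A, k, oversample=10, seed=0):
    """Co-cluster a directed adjacency matrix A (n x n) into k sender and
    k receiver clusters using a random-projection sketch of its SVD.
    Illustrative sketch only; not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Random projection: compress the column space of A.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega                        # n x (k + oversample) sketch
    Q, _ = np.linalg.qr(Y)               # orthonormal basis of the sketch
    # SVD of the small projected matrix instead of A itself.
    B = Q.T @ A                          # (k + oversample) x n
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub[:, :k]                    # approximate left singular vectors
    V = Vt[:k, :].T                      # approximate right singular vectors
    # k-means on the singular vectors gives sender / receiver clusters.
    senders = KMeans(n_clusters=k, n_init=10).fit_predict(U)
    receivers = KMeans(n_clusters=k, n_init=10).fit_predict(V)
    return senders, receivers
```

The expensive step, a full SVD of the n x n adjacency matrix, is replaced by a QR factorization and an SVD of a much smaller sketched matrix, which is what makes the approach attractive for very large networks.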


Learning rates for classification with Gaussian kernels

arXiv.org Machine Learning

This paper aims at a refined error analysis for binary classification using support vector machines (SVMs) with Gaussian kernels and convex losses. Our first result shows that, for some loss functions such as the truncated quadratic loss and the quadratic loss, the SVM with a Gaussian kernel can reach an almost optimal learning rate, provided the regression function is smooth. Our second result shows that, for a large class of loss functions, under a Tsybakov noise assumption, if the regression function is infinitely smooth, then the SVM with a Gaussian kernel can achieve a learning rate of order $m^{-1}$, where $m$ is the number of samples.
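
For orientation, the minimal sketch below fits a Gaussian (RBF) kernel SVM on synthetic two-dimensional data with a smooth decision boundary, the kind of setting where the smoothness assumptions above are plausible. Note that scikit-learn's SVC uses the hinge loss rather than the (truncated) quadratic losses analyzed in the paper, and the bandwidth `gamma` and regularization `C` here are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic binary classification data with a smooth decision boundary.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = (np.sin(np.pi * X[:, 0]) > X[:, 1]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gaussian (RBF) kernel SVM; gamma controls the kernel bandwidth and
# C the regularization strength. SVC uses the hinge loss, whereas the
# paper analyzes other convex losses such as the (truncated) quadratic.
clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```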


Sparse K-Means with $\ell_{\infty}/\ell_0$ Penalty for High-Dimensional Data Clustering

arXiv.org Machine Learning

Sparse clustering, which aims to find a proper partition of an extremely high-dimensional data set with redundant noise features, has attracted more and more interest in recent years. Existing studies commonly solve the problem in a framework of maximizing the weighted feature contributions subject to an $\ell_2/\ell_1$ penalty. Nevertheless, this framework has two serious drawbacks: the solution unavoidably involves a considerable portion of redundant noise features in many situations, and the framework neither offers an intuitive explanation of why it can select relevant features nor leads to any theoretical guarantee of feature selection consistency. In this article, we attempt to overcome these drawbacks by developing a new sparse clustering framework that uses an $\ell_{\infty}/\ell_0$ penalty. First, we introduce new concepts of optimal partitions and noise features for high-dimensional data clustering problems, based on which the previously known framework can be intuitively explained. Then, we apply the proposed $\ell_{\infty}/\ell_0$ framework to formulate a new sparse k-means model with the $\ell_{\infty}/\ell_0$ penalty ($\ell_0$-k-means for short), and we propose an efficient iterative algorithm for solving it. To better understand the behavior of $\ell_0$-k-means, we prove that the solution yielded by the $\ell_0$-k-means algorithm achieves feature selection consistency whenever the data matrix is generated from a high-dimensional Gaussian mixture model. Finally, we provide experiments with both synthetic data and the Allen Developing Mouse Brain Atlas data showing that the proposed $\ell_0$-k-means exhibits better noise-feature detection capacity than the previously known sparse k-means with the $\ell_2/\ell_1$ penalty ($\ell_1$-k-means for short).
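
One way to picture the $\ell_0$-k-means idea is as an alternation between clustering on a selected feature subset and re-selecting the features with the largest between-cluster sum of squares, since an $\ell_{\infty}/\ell_0$-type constraint effectively keeps at most $s$ features with equal weight. The Python sketch below implements that alternation; the helper name `l0_sparse_kmeans`, the feature budget `s`, and the stopping rule are assumptions made for illustration and may differ from the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def l0_sparse_kmeans(X, k, s, n_iter=10, seed=0):
    """Illustrative sketch of sparse k-means with an l_inf/l_0-type constraint:
    alternate between clustering on the currently selected features and
    re-selecting the s features with the largest between-cluster sum of
    squares (BCSS). The budget `s` is an assumed tuning parameter."""
    n, p = X.shape
    selected = np.arange(p)            # start with all features
    labels = None
    for _ in range(n_iter):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed)
        labels = km.fit_predict(X[:, selected])
        # Per-feature BCSS = total sum of squares - within-cluster sum of squares.
        tss = ((X - X.mean(axis=0)) ** 2).sum(axis=0)
        wss = np.zeros(p)
        for c in range(k):
            Xc = X[labels == c]
            wss += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        bcss = tss - wss
        new_selected = np.argsort(bcss)[-s:]   # keep the s most informative features
        if set(new_selected) == set(selected): # stop once the selection stabilizes
            break
        selected = new_selected
    return labels, np.sort(selected)
```

In this simplified view, the binary, equal-magnitude feature weights are what distinguish the $\ell_{\infty}/\ell_0$ formulation from the soft, $\ell_2/\ell_1$-penalized weights of the earlier sparse k-means.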