A Appendix: Proofs and Algorithms. A.1 Proofs of results in Section 4. Proof of Proposition 4.1. Plug B

Neural Information Processing Systems

(Bertsekas, 1999). Algorithm 1. Furthermore, we call ˆf(·) … We can show that |f(·) − ˆf(·)| ≤ … for all arguments in the interval […, …]. Besides, computing the upper bound claimed in Proposition 4.2 requires finding … The second equality follows from the fact that the objective function is affine w.r.t. … Finally, we verify the remaining two components. This finishes the proof of our claim.



Rate-optimal community detection near the KS threshold via node-robust algorithms

Ding, Jingqiu, Hua, Yiding, Lindberg, Kasper, Steurer, David, Storozhenko, Aleksandr

arXiv.org Machine Learning

We study community detection in the \emph{symmetric $k$-stochastic block model}, where $n$ nodes are evenly partitioned into $k$ clusters with intra- and inter-cluster connection probabilities $p$ and $q$, respectively. Our main result is a polynomial-time algorithm that achieves the minimax-optimal misclassification rate \begin{equation*} \exp \Bigl(-\bigl(1 \pm o(1)\bigr) \tfrac{C}{k}\Bigr), \quad \text{where } C = (\sqrt{pn} - \sqrt{qn})^2, \end{equation*} whenever $C \ge K\,k^2\,\log k$ for some universal constant $K$, matching the Kesten--Stigum (KS) threshold up to a $\log k$ factor. Notably, this rate holds even when an adversary corrupts an $\eta \le \exp\bigl(-(1 \pm o(1)) \tfrac{C}{k}\bigr)$ fraction of the nodes. To the best of our knowledge, the minimax rate was previously only attainable either via computationally inefficient procedures [ZZ15] or via polynomial-time algorithms that require strictly stronger assumptions such as $C \ge K k^3$ [GMZZ17]. In the node-robust setting, the best known algorithm requires the substantially stronger condition $C \ge K k^{102}$ [LM22]. Our results close this gap by providing the first polynomial-time algorithm that achieves the minimax rate near the KS threshold in both settings. Our work has two key technical contributions: (1) we robustify majority voting via the Sum-of-Squares framework; (2) we develop a novel graph bisection algorithm via robust majority voting, which allows us to significantly improve the misclassification rate to $1/\mathrm{poly}(k)$ for the initial estimation near the KS threshold.
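As a rough numerical illustration of the quantities in this abstract, the signal strength $C = (\sqrt{pn} - \sqrt{qn})^2$, the threshold condition $C \ge K k^2 \log k$, and the leading-order rate $\exp(-C/k)$ can be computed directly. All parameter values below are toy choices, not from the paper, and the universal constant $K$ is unspecified there, so `K = 1.0` is a placeholder:

```python
import math

def divergence_C(p: float, q: float, n: int) -> float:
    """Signal strength C = (sqrt(pn) - sqrt(qn))^2 from the abstract."""
    return (math.sqrt(p * n) - math.sqrt(q * n)) ** 2

def minimax_rate(p: float, q: float, n: int, k: int) -> float:
    """Leading-order minimax misclassification rate exp(-C/k),
    ignoring the (1 +/- o(1)) factor."""
    return math.exp(-divergence_C(p, q, n) / k)

# Toy instance: n nodes, k clusters, intra-probability p, inter-probability q.
n, k, p, q = 10_000, 3, 0.01, 0.002
K = 1.0  # placeholder for the unspecified universal constant K
C = divergence_C(p, q, n)
print(f"C = {C:.2f}, threshold K*k^2*log k = {K * k**2 * math.log(k):.2f}")
print(f"leading-order rate exp(-C/k) = {minimax_rate(p, q, n, k):.3e}")
```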


On the Robustness of Spectral Algorithms for Semirandom Stochastic Block Models

Bhaskara, Aditya, Jha, Agastya Vibhuti, Kapralov, Michael

Neural Information Processing Systems

In a graph bisection problem, we are given a graph G with two equally-sized unlabeled communities, and the goal is to recover the vertices in these communities. A popular heuristic, known as spectral clustering, is to output an estimated community assignment based on the eigenvector corresponding to the second smallest eigenvalue of the Laplacian of G. Spectral algorithms can be shown to provably recover the cluster structure for graphs generated from certain probabilistic models, such as the Stochastic Block Model (SBM). However, spectral clustering is known to be non-robust to model mis-specification. Techniques based on semidefinite programming have been shown to be more robust, but they incur significant computational overheads. In this work, we study the robustness of spectral algorithms against semirandom adversaries.
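The spectral heuristic described above can be sketched in a few lines: build the unnormalized Laplacian, take the eigenvector of its second smallest eigenvalue (the Fiedler vector), and threshold its sign. This is a minimal illustration on a synthetic two-community SBM; the parameters and the sampler are toy choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(n: int, p: float, q: float) -> np.ndarray:
    """Adjacency matrix of a two-community SBM; first n/2 nodes are community 0."""
    labels = np.repeat([0, 1], n // 2)
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample each edge once
    return (upper | upper.T).astype(float)

def spectral_bisect(A: np.ndarray) -> np.ndarray:
    """Sign pattern of the eigenvector for the second smallest Laplacian eigenvalue."""
    L = np.diag(A.sum(axis=1)) - A          # unnormalized Laplacian
    _, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return (eigvecs[:, 1] > 0).astype(int)  # threshold the Fiedler vector at 0

n = 200
A = sample_sbm(n, p=0.5, q=0.05)
truth = np.repeat([0, 1], n // 2)
guess = spectral_bisect(A)
# Community labels are only defined up to a global flip.
err = min(np.mean(guess != truth), np.mean(guess == truth))
print(f"misclassification rate: {err:.3f}")
```

With this wide separation between p and q, the thresholded Fiedler vector recovers the planted bisection essentially exactly; the interesting regimes in the paper are far sparser.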





Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The KPFM random graph model is as defined in lines 66-69, where ([n], S) is a weighted graph that admits a pref. What we **should have** said at the bottom of p.2 is that we use independent sampling of edges only to prove concentration of hat{L}. The result extends to other graph models with dependent edges (e.g., graph lifts) if one can prove concentration. "Misclustered" is defined the usual way: p_err = (1/n) * (min over all permutations of cluster labels of the Hamming distance between the two label vectors). Alg 1 is almost that of [13,21] (there, columns are normalized after step 3, not before).
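The p_err definition quoted above (minimum over cluster-label permutations of the normalized Hamming distance) can be sketched directly; this brute-force version is only practical for a small number of clusters:

```python
from itertools import permutations

def p_err(truth, guess):
    """Misclassification rate: min over permutations of cluster labels of the
    Hamming distance between label vectors, normalized by n."""
    labels = sorted(set(truth) | set(guess))
    best = len(truth)
    for perm in permutations(labels):
        relabel = dict(zip(labels, perm))  # apply one label permutation to guess
        dist = sum(t != relabel[g] for t, g in zip(truth, guess))
        best = min(best, dist)
    return best / len(truth)

# A guess that is correct up to swapping labels 0 and 1 has zero error:
print(p_err([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 0.0
print(p_err([0, 0, 1, 1], [1, 0, 0, 0]))  # -> 0.25
```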


On the Robustness of Spectral Algorithms for Semirandom Stochastic Block Models

Bhaskara, Aditya, Jha, Agastya Vibhuti, Kapralov, Michael, Manoj, Naren Sarayu, Mazzali, Davide, Wrzos-Kaminska, Weronika

arXiv.org Machine Learning

In a graph bisection problem, we are given a graph $G$ with two equally-sized unlabeled communities, and the goal is to recover the vertices in these communities. A popular heuristic, known as spectral clustering, is to output an estimated community assignment based on the eigenvector corresponding to the second smallest eigenvalue of the Laplacian of $G$. Spectral algorithms can be shown to provably recover the cluster structure for graphs generated from certain probabilistic models, such as the Stochastic Block Model (SBM). However, spectral clustering is known to be non-robust to model mis-specification. Techniques based on semidefinite programming have been shown to be more robust, but they incur significant computational overheads. In this work, we study the robustness of spectral algorithms against semirandom adversaries. Informally, a semirandom adversary is allowed to ``helpfully'' change the specification of the model in a way that is consistent with the ground-truth solution. Our semirandom adversaries in particular are allowed to add edges inside clusters or increase the probability that an edge appears inside a cluster. Semirandom adversaries are a useful tool to determine the extent to which an algorithm has overfit to statistical assumptions on the input. On the positive side, we identify classes of semirandom adversaries under which spectral bisection using the _unnormalized_ Laplacian is strongly consistent, i.e., it exactly recovers the planted partitioning. On the negative side, we show that in these classes spectral bisection with the _normalized_ Laplacian outputs a partitioning that makes a classification mistake on a constant fraction of the vertices. Finally, we demonstrate numerical experiments that complement our theoretical findings.
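To make the semirandom adversary concrete, here is a toy sketch: starting from an SBM sample, we "helpfully" add random intra-cluster edges (a change consistent with the ground-truth partition) and re-run unnormalized spectral bisection. Everything here (parameters, sampler, edge-addition probability) is an illustrative assumption, not the paper's construction, and this easy instance is only meant to show the adversary's action, not to reproduce the paper's positive or negative results:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sbm(n, p, q, labels):
    """Adjacency matrix of an SBM with the given ground-truth labels."""
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(float)

def bisect_unnormalized(A):
    """Threshold the Fiedler vector of the unnormalized Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

n = 200
labels = np.repeat([0, 1], n // 2)
A = sample_sbm(n, p=0.5, q=0.05, labels=labels)

# Semirandom "helpful" change: add each missing intra-cluster edge w.p. 0.3.
same = (labels[:, None] == labels[None, :]).astype(float)
extra = np.triu((rng.random((n, n)) < 0.3) * same * (1 - A), k=1)
A_adv = A + extra + extra.T  # still consistent with the planted partition

errs = {}
for name, M in [("random", A), ("semirandom", A_adv)]:
    guess = bisect_unnormalized(M)
    errs[name] = min(np.mean(guess != labels), np.mean(guess == labels))
    print(f"{name}: error {errs[name]:.3f}")
```

On this well-separated instance both runs recover the partition; the paper's contribution is characterizing adversary classes where the unnormalized variant remains strongly consistent while the normalized one provably does not.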