A Appendix: Proofs and Algorithms. A.1 Proofs of results in Section 4. Proof of Proposition 4.1. Plug B

Neural Information Processing Systems

(Bertsekas, 1999). Algorithm 1. Furthermore, we call f̂(·), X … We can show that |f(·) − f̂(·)| ≤ …, ∀ · ∈ [·, ·]. Besides, computing the upper bound claimed in Proposition 4.2 requires finding … The second equality follows from the fact that the objective function is affine w.r.t. … Finally, we verify the remaining two components. This finishes the proof of our claim.



SetupKit: Efficient Multi-Corner Setup/Hold Time Characterization Using Bias-Enhanced Interpolation and Active Learning

Zhou, Junzhuo, Wang, Ziwen, Xia, Haoxuan, Yan, Yuxin, Zhu, Chengyu, Lin, Ting-Jung, Xing, Wei, He, Lei

arXiv.org Artificial Intelligence

Accurate setup/hold time characterization is crucial for modern chip timing closure, but its reliance on potentially millions of SPICE simulations across diverse process-voltage-temperature (PVT) corners creates a major bottleneck, often lasting weeks or months. Existing methods suffer from slow search convergence and inefficient exploration, especially in the multi-corner setting. We introduce SetupKit, a novel framework designed to break this bottleneck using statistical intelligence, circuit analysis, and active learning (AL). SetupKit integrates three key innovations: BEIRA, a bias-enhanced interpolation search derived from statistical error modeling that accelerates convergence by overcoming stagnation issues; initial search-interval estimation via circuit analysis; and an AL strategy based on Gaussian processes. The AL component learns PVT-timing correlations and actively guides the expensive simulations to the most informative corners, minimizing redundancy in multi-corner characterization. Evaluated on industrial 22nm standard cells across 16 PVT corners, SetupKit demonstrates a significant 2.4x overall CPU time reduction (from 720 to 290 days on a single core) compared to standard practices, drastically cutting characterization time. SetupKit offers a principled, learning-based approach to library characterization, addressing a critical EDA challenge and paving the way for more intelligent simulation management.
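The root-finding step that BEIRA accelerates can be illustrated with a plain bisection baseline. This is only a sketch of the search problem, not the bias-enhanced interpolation itself; `fails` and `find_setup_time` are hypothetical names, with `fails` standing in for an expensive SPICE pass/fail query at a candidate setup time:

```python
def find_setup_time(fails, lo, hi, tol=1e-3):
    """Locate the smallest passing setup time in [lo, hi] by bisection.

    Assumes fails(lo) is True and fails(hi) is False, i.e. the
    pass/fail behavior is monotone in the setup time (the usual
    setting in setup/hold characterization).
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fails(mid):
            lo = mid   # still failing: the threshold lies above mid
        else:
            hi = mid   # passing: the threshold lies at or below mid
    return hi          # smallest observed passing time, within tol

# Toy stand-in for a simulator whose true threshold is 0.37 time units.
t = find_setup_time(lambda x: x < 0.37, 0.0, 1.0)
```

Each iteration costs one simulation, so halving the interval per step is exactly where an interpolation scheme with a good bias model can save queries, and where the abstract's AL component decides which corner gets the next query.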


Rate-optimal community detection near the KS threshold via node-robust algorithms

Ding, Jingqiu, Hua, Yiding, Lindberg, Kasper, Steurer, David, Storozhenko, Aleksandr

arXiv.org Machine Learning

We study community detection in the \emph{symmetric $k$-stochastic block model}, where $n$ nodes are evenly partitioned into $k$ clusters with intra- and inter-cluster connection probabilities $p$ and $q$, respectively. Our main result is a polynomial-time algorithm that achieves the minimax-optimal misclassification rate \begin{equation*} \exp \Bigl(-\bigl(1 \pm o(1)\bigr) \tfrac{C}{k}\Bigr), \quad \text{where } C = (\sqrt{pn} - \sqrt{qn})^2, \end{equation*} whenever $C \ge K\,k^2\,\log k$ for some universal constant $K$, matching the Kesten--Stigum (KS) threshold up to a $\log k$ factor. Notably, this rate holds even when an adversary corrupts an $\eta \le \exp\bigl(- (1 \pm o(1)) \tfrac{C}{k}\bigr)$ fraction of the nodes. To the best of our knowledge, the minimax rate was previously only attainable either via computationally inefficient procedures [ZZ15] or via polynomial-time algorithms that require strictly stronger assumptions such as $C \ge K k^3$ [GMZZ17]. In the node-robust setting, the best known algorithm requires the substantially stronger condition $C \ge K k^{102}$ [LM22]. Our results close this gap by providing the first polynomial-time algorithm that achieves the minimax rate near the KS threshold in both settings. Our work has two key technical contributions: (1) we robustify majority voting via the Sum-of-Squares framework, (2) we develop a novel graph bisection algorithm via robust majority voting, which allows us to significantly improve the misclassification rate to $1/\mathrm{poly}(k)$ for the initial estimation near the KS threshold.
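To make the majority-voting ingredient concrete, here is a minimal non-robust sketch of the refinement step for an assortative SBM ($p > q$). This is an illustration only, not the Sum-of-Squares-robustified procedure from the paper; the function name and round count are assumptions:

```python
import numpy as np

def majority_vote_refine(A, labels, k, rounds=3):
    """Repeatedly reassign each node to the cluster that contains the
    most of its neighbors, starting from a rough initial labeling.

    A: (n, n) symmetric 0/1 adjacency matrix; labels: (n,) int array
    of cluster ids in {0, ..., k-1}.
    """
    labels = labels.copy()
    for _ in range(rounds):
        onehot = np.eye(k)[labels]      # (n, k) cluster indicator matrix
        counts = A @ onehot             # counts[i, c] = neighbors of i in c
        labels = counts.argmax(axis=1)  # vote: most common neighbor cluster
    return labels
```

With a sufficiently accurate initial estimate (the paper's graph-bisection step supplies one with error $1/\mathrm{poly}(k)$), each voting round sharpens the labeling; the robust version replaces this plain argmax vote so that adversarially corrupted nodes cannot dominate any vote.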


On the Robustness of Spectral Algorithms for Semirandom Stochastic Block Models Aditya Bhaskara Agastya Vibhuti Jha Michael Kapralov

Neural Information Processing Systems

In a graph bisection problem, we are given a graph G with two equally-sized unlabeled communities, and the goal is to recover the vertices in these communities. A popular heuristic, known as spectral clustering, is to output an estimated community assignment based on the eigenvector corresponding to the second smallest eigenvalue of the Laplacian of G. Spectral algorithms can be shown to provably recover the cluster structure for graphs generated from certain probabilistic models, such as the Stochastic Block Model (SBM). However, spectral clustering is known to be non-robust to model mis-specification. Techniques based on semidefinite programming have been shown to be more robust, but they incur significant computational overheads. In this work, we study the robustness of spectral algorithms against semirandom adversaries.
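The heuristic described in this abstract is easy to sketch. Below is the textbook (non-robust) version, splitting on the sign of the eigenvector of the second smallest Laplacian eigenvalue (the Fiedler vector); the function name is an assumption, and this is the baseline whose robustness the paper studies, not the paper's own algorithm:

```python
import numpy as np

def spectral_bisection(A):
    """Two-way spectral clustering on a symmetric 0/1 adjacency matrix A.

    Builds the unnormalized Laplacian L = D - A and assigns each vertex
    a community by the sign of its entry in the Fiedler vector.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A
    # eigh returns eigenvalues of a symmetric matrix in ascending order,
    # so column 1 of the eigenvectors is the Fiedler vector.
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)   # community labels in {0, 1}
```

Note the output labeling is only defined up to a global swap of the two communities, since eigenvectors are determined only up to sign.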





Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The KPFM random graph model is as defined in lines 66-69, where ([n], S) is a weighted graph that admits a pref. What we **should have** said at the bottom of p.2 is that we use independent sampling of edges only to prove concentration of hat{L}. The result extends to other graph models with dependent edges (e.g., graph lifts) if one can prove concentration. "Misclustered" is defined the usual way: p_err = (1/n) * (min over all permutations of cluster labels of the Hamming distance between the label vectors). Alg 1 is almost that of [13,21] (there, columns are normalized after step 3, not before).
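The misclustering measure in this response can be computed directly from its definition. The helper below is a hypothetical illustration (names are not from the source); it brute-forces all label permutations, which is fine for the small k where this metric is typically reported:

```python
from itertools import permutations

import numpy as np

def p_err(pred, truth, k):
    """Misclustering rate: (1/n) times the minimum, over all permutations
    of the k cluster labels, of the Hamming distance between the
    predicted and true label vectors.

    Brute-force over k! permutations, so intended only for small k.
    """
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    n = truth.size
    best = min(
        int(np.sum(np.asarray(perm)[pred] != truth))
        for perm in permutations(range(k))
    )
    return best / n
```

Relabeling `pred` through the best permutation before comparing is what makes the metric invariant to how an algorithm happens to number its clusters.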