
Collaborating Authors: dasgupta


Hierarchical Clustering Beyond the Worst-Case

Neural Information Processing Systems

Hierarchical clustering, that is, computing a recursive partitioning of a dataset to obtain clusters at increasingly finer granularity, is a fundamental problem in data analysis. Although hierarchical clustering has mostly been studied through procedures such as linkage algorithms or top-down heuristics, rather than as optimization problems, Dasgupta [1] recently proposed an objective function for hierarchical clustering and initiated a line of work developing algorithms that explicitly optimize an objective (see also [2, 3, 4]). In this paper, we consider a fairly general random graph model for hierarchical clustering, called the hierarchical stochastic blockmodel (HSBM), and show that in certain regimes the SVD approach of McSherry [5], combined with specific linkage methods, results in a clustering that gives an O(1)-approximation to Dasgupta's cost function. We also show that an approach based on SDP relaxations for balanced cuts, building on the work of Makarychev et al. [6] and combined with the recursive sparsest-cut algorithm of Dasgupta, yields an O(1)-approximation in slightly larger regimes and also in the semi-random setting, where an adversary may remove edges from the random graph generated according to an HSBM. Finally, we report an empirical evaluation on synthetic and real-world data showing that our proposed SVD-based method does indeed achieve a better cost than other widely used heuristics, and also results in better classification accuracy when the underlying problem is multi-class classification.
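Dasgupta's cost function charges each graph edge the number of leaves under the lowest common ancestor of its endpoints, so trees that cut heavy edges near the leaves score better. The following is a minimal sketch of evaluating that cost (the tree encoding as nested tuples and the function names are illustrative, not taken from the paper):

```python
def leaves(t):
    # Collect the leaf labels under a (possibly nested) pair tree.
    if isinstance(t, tuple):
        return leaves(t[0]) | leaves(t[1])
    return {t}

def dasgupta_cost(tree, edges):
    """cost(T) = sum over edges (i, j, w) of w * |leaves(T[i v j])|,
    where T[i v j] is the subtree rooted at the LCA of leaves i and j."""
    def lca_size(t, i, j):
        # Descend toward the LCA of i and j; its leaf count is the charge.
        if isinstance(t, tuple):
            for child in t:
                ls = leaves(child)
                if i in ls and j in ls:
                    return lca_size(child, i, j)
        return len(leaves(t))
    return sum(w * lca_size(tree, i, j) for i, j, w in edges)
```

For example, on the balanced tree `((0, 1), (2, 3))`, an intra-cluster edge such as (0, 1) is charged 2 leaves, while the cross edge (1, 2) is charged all 4.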






An adaptive nearest neighbor rule for classification

Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

Neural Information Processing Systems

Find the smallest $k$ for which $\Delta(n, k, \delta)$ is exceeded, where $\Delta(n, k, \delta) = c_1 \sqrt{\frac{\log n + \log(1/\delta)}{k}}$. Then, with probability at least $1 - \delta$, the resulting classifier $g_n$ satisfies the following: for every point $x \in \operatorname{supp}(\mu)$, if $n \geq \frac{C}{\operatorname{adv}(x)} \max\left(\log \frac{1}{\operatorname{adv}(x)}, \log \frac{1}{\delta}\right)$, then $g_n(x) = g(x)$.
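The rule can be sketched as follows: for a query point, grow the neighborhood size $k$ until the empirical label bias among the $k$ nearest neighbors exceeds a confidence width shrinking like $\sqrt{(\log n + \log(1/\delta))/k}$, and only then predict. This is a hedged sketch; the constant `c1` and the exact width are illustrative placeholders, not the paper's quantities.

```python
import math

def adaptive_nn_predict(X, y, x, delta=0.05, c1=1.0):
    """Adaptive nearest-neighbor rule (sketch). Labels are +1/-1.
    Grow k until the k nearest labels are confidently biased away
    from 1/2; return the majority sign, or 0 (abstain) otherwise."""
    n = len(X)
    # Training indices sorted by squared distance to the query x.
    order = sorted(range(n),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(X[i], x)))
    pos = 0
    for k in range(1, n + 1):
        pos += (y[order[k - 1]] == 1)
        bias = abs(pos / k - 0.5)
        width = c1 * math.sqrt((math.log(n) + math.log(1 / delta)) / k)
        if bias > width:
            return 1 if pos / k > 0.5 else -1
    return 0  # abstain: no neighborhood size was confidently biased
```

With a loose constant the rule abstains rather than guess, which mirrors the pointwise nature of the guarantee above.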



Improved Guarantees for k-means++ and k-means++ Parallel

Neural Information Processing Systems

Lloyd's algorithm uses iterative improvements to find a locally optimal k-means clustering. The performance of Lloyd's algorithm crucially depends on the quality of the initial clustering, which is defined by the initial set of centers, called a seed.
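k-means++ builds such a seed by D² sampling: the first center is drawn uniformly, and each subsequent center is drawn with probability proportional to its squared distance to the nearest center chosen so far. A minimal sketch in plain Python (points as tuples; not the paper's implementation):

```python
import random

def kmeans_pp_seed(points, k, rng=None):
    """k-means++ seeding sketch: D^2 sampling of k initial centers."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]          # first center: uniform
    while len(centers) < k:
        # Squared distance from each point to its nearest chosen center.
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        total = sum(d2)
        if total == 0:                      # every point already covered
            centers.append(rng.choice(points))
            continue
        r = rng.random() * total            # D^2-weighted draw
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc > r:
                centers.append(p)
                break
    return centers
```

On two well-separated blobs, whichever blob supplies the first center, the D² weights force the second center into the other blob, which is exactly the behavior Lloyd's algorithm benefits from.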



On the cohesion and separability of average-link for hierarchical agglomerative clustering

Neural Information Processing Systems

Average-link is widely recognized as one of the most popular and effective methods for building hierarchical agglomerative clusterings. The available theoretical analyses show that this method has a much better approximation than other popular heuristics, such as single-linkage and complete-linkage, with regard to variants of Dasgupta's cost function [STOC 2016]. However, these analyses do not separate average-link from a random hierarchy, and they are not appealing for metric spaces since every hierarchical clustering has a $1/2$ approximation with regard to the variant of Dasgupta's function that is employed for dissimilarity measures [Moseley and Yang 2020]. In this paper, we present a comprehensive study of the performance of average-link in metric spaces, with regard to several natural criteria that capture separability and cohesion and are more interpretable than Dasgupta's cost function and its variants. We also present experimental results with real datasets that, together with our theoretical analyses, suggest that average-link is a better choice than other related methods when both cohesion and separability are important goals.
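For concreteness, average-link starts from singleton clusters and repeatedly merges the pair of clusters minimizing the average pairwise dissimilarity between them. A small, unoptimized sketch for a generic dissimilarity function (illustrative only, not the code behind the paper's experiments):

```python
def average_link(points, dist, k=1):
    """Average-link agglomerative clustering (sketch): merge the pair of
    clusters with the smallest average pairwise dissimilarity until k
    clusters remain; returns clusters as lists of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average dissimilarity over all cross pairs.
                d = sum(dist(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

Averaging over all cross pairs is what trades off cohesion against separability: unlike single-linkage, one accidentally close pair cannot force a merge of two otherwise distant clusters.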