Krishnaswamy, Ravishankar
OOD-DiskANN: Efficient and Scalable Graph ANNS for Out-of-Distribution Queries
Jaiswal, Shikhar, Krishnaswamy, Ravishankar, Garg, Ankit, Simhadri, Harsha Vardhan, Agrawal, Sheshansh
State-of-the-art algorithms for Approximate Nearest Neighbor Search (ANNS) such as DiskANN, FAISS-IVF, and HNSW build data-dependent indices that offer substantially better accuracy and search efficiency over data-agnostic indices by overfitting to the index data distribution. When the query data is drawn from a different distribution - e.g., when the index represents image embeddings and the query represents textual embeddings - such algorithms lose much of this performance advantage. On a variety of datasets, for a fixed recall target, latency is worse by an order of magnitude or more for Out-Of-Distribution (OOD) queries as compared to In-Distribution (ID) queries. The question we address in this work is whether ANNS algorithms can be made efficient for OOD queries if the index construction is given access to a small sample set of these queries. We answer positively by presenting OOD-DiskANN, which uses a sparing sample (1% of index set size) of OOD queries, and provides up to 40% improvement in mean query latency over SoTA algorithms of a similar memory footprint.

Since solving the problem exactly requires an expensive exhaustive scan of the database - which would be impractical for real-world indices that span billions of objects - practical interactive search systems use Approximate Nearest Neighbor Search (ANNS) algorithms with highly sub-linear query complexity [10, 18, 24, 30] to answer such queries. The quality of such ANN indices is often measured by $k$-recall@$k$, which is the overlap between the top-$k$ results of the index search with the ground-truth $k$-nearest neighbors ($k$-NNs) in the corpus for the query, averaged over a representative query set. State-of-the-art algorithms for ANNS, such as graph-based indices [16, 24, 30] which use data-dependent index construction, achieve better query efficiency over prior data-agnostic methods like LSH [6, 18] (see Section A.1 for more details). Such efficiency enables these indices to serve queries with > 90% recall at a latency of a few milliseconds, as required in interactive web scenarios.
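The $k$-recall@$k$ metric described above can be sketched directly from its definition; this is a minimal illustration, and the function names are ours, not from any of the cited libraries:

```python
def k_recall_at_k(retrieved, ground_truth, k):
    """Fraction of the true k nearest neighbors that appear in the
    top-k results returned by the index, for a single query."""
    return len(set(retrieved[:k]) & set(ground_truth[:k])) / k

def mean_recall(all_retrieved, all_ground_truth, k):
    """k-recall@k averaged over a representative query set."""
    per_query = [k_recall_at_k(r, g, k)
                 for r, g in zip(all_retrieved, all_ground_truth)]
    return sum(per_query) / len(per_query)

# Toy example: for one query the index returns ids [3, 7, 9], while the
# exact 3-NNs are [3, 9, 5] -> overlap {3, 9}, so 3-recall@3 is 2/3.
print(mean_recall([[3, 7, 9]], [[3, 9, 5]], k=3))  # prints 0.6666666666666666
```

An index meeting the "> 90% recall" bar quoted above would need this average to exceed 0.9 on its query set.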
Learning Mixture of Gaussians with Streaming Data
Raghunathan, Aditi, Krishnaswamy, Ravishankar, Jain, Prateek
In this paper, we study the problem of learning a mixture of Gaussians with streaming data: given a stream of $N$ points in $d$ dimensions generated by an unknown mixture of $k$ spherical Gaussians, the goal is to estimate the model parameters using a single pass over the data stream. We analyze a streaming version of the popular Lloyd's heuristic and show that the algorithm estimates all the unknown centers of the component Gaussians accurately if they are sufficiently separated. Assuming each pair of centers is $C\sigma$ apart with $C=\Omega((k\log k)^{1/4})$, where $\sigma^2$ is the maximum variance of any Gaussian component, we show that asymptotically the algorithm estimates the centers optimally (up to constants); our center separation requirement matches the best known result for spherical Gaussians \citep{vempalawang}. For finite samples, we show that a bias term based on the initial estimate decreases at an $O(1/{\rm poly}(N))$ rate, while the variance decreases at the nearly optimal rate of $\sigma^2 d/N$. Our analysis requires seeding the algorithm with a good initial estimate of the true cluster centers, for which we provide an online-PCA-based clustering algorithm. Indeed, the asymptotic per-step time complexity of our algorithm is the optimal $d\cdot k$, while its space complexity is $O(dk\log k)$. In addition to the bias and variance terms, which tend to $0$, the hard-thresholding-based updates of the streaming Lloyd's algorithm are agnostic to the data distribution and hence incur an approximation error that cannot be avoided. However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with the estimation error decreasing at a nearly optimal rate and tending to $0$ as $N\rightarrow \infty$.
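The one-pass, hard-thresholding update at the heart of streaming Lloyd's can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: the initial seeding is taken as given (the paper obtains it via an online-PCA-based clustering), a simple running-average step size stands in for the paper's schedule, and all names are hypothetical.

```python
import random

def streaming_lloyds(stream, centers, t0=10):
    """One-pass streaming Lloyd's sketch: each arriving point is
    hard-assigned to its nearest current center, and that center is
    nudged toward the point with a step size decaying like 1/count.
    `centers` is the initial seeding, assumed sufficiently accurate."""
    counts = [t0] * len(centers)      # pseudo-counts smooth the early, noisy steps
    for x in stream:
        # hard thresholding: pick the single nearest center
        j = min(range(len(centers)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(centers[i], x)))
        counts[j] += 1
        eta = 1.0 / counts[j]         # running-average step size
        centers[j] = [c + eta * (a - c) for c, a in zip(centers[j], x)]
    return centers

# Toy check: two well-separated spherical Gaussians in 2-D.
random.seed(0)
truth = [(-5.0, 0.0), (5.0, 0.0)]
stream = [tuple(random.gauss(m, 1.0) for m in random.choice(truth))
          for _ in range(4000)]
est = streaming_lloyds(stream, centers=[[-4.0, 1.0], [4.0, -1.0]])
```

With this separation the estimates land close to the true centers; the residual error shrinks roughly like the $\sigma^2 d/N$ variance term discussed above, plus a bias from the initial seed that decays as more of the stream is consumed.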
Relax, no need to round: integrality of clustering formulations
Awasthi, Pranjal, Bandeira, Afonso S., Charikar, Moses, Krishnaswamy, Ravishankar, Villar, Soledad, Ward, Rachel
We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: $k$-means and $k$-median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are $k$ clusters in $\mathbb{R}^m$ and data from each cluster consists of $n$ points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these $k$ clusters as the optimal integral solution? For the $k$-median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation $\epsilon > 0$ between the balls. In other words, the pairwise center separation is $\Delta > 2+\epsilon$. Under the same distributional model, the $k$-means LP relaxation fails to recover such clusters at separation as large as $\Delta = 4$. Yet, if we enforce PSD constraints on the $k$-means LP, we get exact cluster recovery at center separation $\Delta > 2\sqrt{2}(1+\sqrt{1/m})$. In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the $k$-means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, $k$-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.
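As a toy illustration of the planted model, the sketch below solves the integral $k$-median problem by brute force over medoid choices on two unit-radius clusters whose centers are $2+\epsilon$ apart. It is only a stand-in: in the regime where the LP relaxation is tight its optimum coincides with this integral optimum, but the code does not solve the LP itself, and the instance is a tiny hand-built one, not sampled from the distributional model.

```python
from itertools import combinations

def kmedian_brute(points, k):
    """Exhaustive k-median over medoid subsets: choose the k data
    points minimizing the sum of distances from every point to its
    nearest chosen medoid; return each point's medoid index."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    best = None
    for medoids in combinations(range(len(points)), k):
        cost = sum(min(d(points[i], points[j]) for j in medoids)
                   for i in range(len(points)))
        if best is None or cost < best[0]:
            best = (cost, medoids)
    _, medoids = best
    return [min(medoids, key=lambda j: d(points[i], points[j]))
            for i in range(len(points))]

# Planted instance: two unit-radius balls in R^2 with center
# separation Delta = 2 + epsilon (epsilon = 0.1), five points each.
eps = 0.1
offsets = [(0.0, 0.0), (0.7, 0.0), (-0.7, 0.0), (0.0, 0.7), (0.0, -0.7)]
cluster_a = [(x, y) for x, y in offsets]
cluster_b = [(x + 2 + eps, y) for x, y in offsets]
labels = kmedian_brute(cluster_a + cluster_b, k=2)
print(labels)  # prints [0, 0, 0, 0, 0, 5, 5, 5, 5, 5]
```

The optimal medoids are the two ball centers, so the planted partition is recovered exactly even at this barely-positive separation, consistent with the $\Delta > 2+\epsilon$ bound for the $k$-median relaxation.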