Linear Partial Gromov-Wasserstein Embedding

arXiv.org Artificial Intelligence

The Gromov-Wasserstein (GW) problem, a variant of the classical optimal transport (OT) problem, has attracted growing interest in the machine learning and data science communities due to its ability to quantify similarity between measures in different metric spaces. However, like the classical OT problem, GW imposes an equal mass constraint between measures, which restricts its application in many machine learning tasks. To address this limitation, the partial Gromov-Wasserstein (PGW) problem has been introduced, which relaxes the equal mass constraint, enabling the comparison of general positive Radon measures. Despite this, both GW and PGW face significant computational challenges due to their non-convex nature. To overcome these challenges, we propose the linear partial Gromov-Wasserstein (LPGW) embedding, a linearized embedding technique for the PGW problem. For $K$ different metric measure spaces, the pairwise computation of the PGW distance requires solving the PGW problem $\mathcal{O}(K^2)$ times. In contrast, the proposed linearization technique reduces this to $\mathcal{O}(K)$ times. Similar to the linearization technique for the classical OT problem, we prove that LPGW defines a valid metric for metric measure spaces. Finally, we demonstrate the effectiveness of LPGW in practical applications such as shape retrieval and learning with transport-based embeddings, showing that LPGW preserves the advantages of PGW in partial matching while significantly enhancing computational efficiency.
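
The $\mathcal{O}(K^2) \to \mathcal{O}(K)$ pattern is easiest to see for the classical OT linearization (LOT) that the abstract cites as the precursor. Below is a minimal sketch of that pattern, assuming the POT library; `ref`, `lot_embedding`, and the barycentric projection are the generic LOT construction, not the paper's LPGW embedding, which applies the same idea to PGW plans and adds PGW-specific terms.

```python
# Sketch of the linearization pattern: K transport solves against one
# fixed reference, then cheap Euclidean comparisons between embeddings,
# instead of K*(K-1)/2 pairwise transport solves.
import numpy as np
import ot  # POT toolbox: pip install pot

rng = np.random.default_rng(0)
K, n, d = 6, 40, 2
measures = [rng.normal(loc=rng.normal(size=d), size=(n, d)) for _ in range(K)]
weights = np.full(n, 1.0 / n)          # uniform measure on each point set

ref = rng.normal(size=(n, d))          # fixed reference measure

def lot_embedding(X):
    """Embed X via barycentric projection onto the reference: one OT solve."""
    M = ot.dist(ref, X)                # squared Euclidean ground cost
    T = ot.emd(weights, weights, M)    # optimal transport plan
    return (T @ X) / weights[:, None]  # Monge-map approximation

# O(K) solves; afterwards any pair is compared in closed form.
embeddings = [lot_embedding(X) for X in measures]
d_hat = np.linalg.norm(embeddings[0] - embeddings[1]) / np.sqrt(n)
print(d_hat)
```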


Stereographic Spherical Sliced Wasserstein Distances

arXiv.org Artificial Intelligence

Applications involving distributions defined on a hypersphere are remarkably diverse, highlighting the importance of spherical geometries across various disciplines. These applications include: 1) mapping the distribution of geographic or geological features on celestial bodies, such as stars and planets [39, 8, 60], 2) magnetoencephalography (MEG) imaging [75] in medical domains, 3) spherical image representations and 360° images [13, 38], such as omnidirectional images in computer vision [40], 4) texture mapping in computer graphics [24, 21], and more recently, 5) deep representation learning, where the latent representation is often mapped to a bounded space, commonly a sphere, on which cosine similarity is utilized for effective representation learning [11, 76]. The analysis of distributions on hyperspheres is traditionally approached through directional statistics, also referred to as circular/spherical statistics [37, 52, 50, 61]. This specialized field is dedicated to the statistical analysis of directions, orientations, and rotations. More recently, with the growing application of optimal transport theory [74, 62] in machine learning, due in part to its favorable statistical, geometrical, and topological properties, there has been an increasing interest in using optimal transport to compare spherical probability measures [14, 32]. One of the main bottlenecks in optimal transport theory is its high computational cost, generally of cubic complexity.
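
The title combines two standard ingredients, and a minimal sketch of each helps fix ideas: stereographic projection from the sphere to the plane, followed by a vanilla sliced Wasserstein distance (random 1D projections, where optimal transport reduces to sorting). This is only the generic pattern under uniform equal-size point sets; the paper's actual construction uses its own projection family and analysis.

```python
# Generic pattern: stereographic projection + vanilla sliced Wasserstein.
# 1D OT between sorted samples is O(n log n), avoiding the cubic cost of
# general OT solvers. Points near the north pole are sent far away, a
# known caveat of stereographic projection.
import numpy as np

rng = np.random.default_rng(0)

def sphere_sample(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def stereographic(P):
    """Project S^2 minus the north pole onto R^2 from the north pole."""
    return P[:, :2] / (1.0 - P[:, 2:3])

def sliced_w2(X, Y, n_proj=100):
    """Monte Carlo sliced 2-Wasserstein between equal-size point sets."""
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    px, py = X @ thetas.T, Y @ thetas.T   # (n, n_proj) 1D projections
    px.sort(axis=0); py.sort(axis=0)      # 1D OT = sorting
    return np.sqrt(((px - py) ** 2).mean())

X, Y = sphere_sample(200), sphere_sample(200)
print(sliced_w2(stereographic(X), stereographic(Y)))
```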


Equivariant vs. Invariant Layers: A Comparison of Backbone and Pooling for Point Cloud Classification

arXiv.org Artificial Intelligence

Learning from set-structured data, such as point clouds, has gained significant attention from the community. Geometric deep learning provides a blueprint for designing effective set neural networks by incorporating permutation symmetry. Our focus is on permutation invariant networks, which are composed of a permutation equivariant backbone, a permutation invariant global pooling layer, and a regression/classification head. While the existing literature has focused on improving permutation equivariant backbones, the impact of global pooling is often overlooked. In this paper, we examine the interplay between permutation equivariant backbones and permutation invariant global pooling on three benchmark point cloud classification datasets. Our findings reveal that: 1) complex pooling methods, such as transport-based or attention-based poolings, can significantly boost the performance of simple backbones, but the benefits diminish for more complex backbones, 2) even complex backbones can benefit from pooling layers in low data scenarios, 3) surprisingly, the choice of pooling layers can have a more significant impact on the model's performance than adjusting the width and depth of the backbone, and 4) pairwise combination of pooling layers can significantly improve the performance of a fixed backbone. Our comprehensive study provides insights for practitioners to design better permutation invariant set neural networks.
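
To make the three-part decomposition concrete, here is a minimal DeepSets-style sketch in PyTorch of the architecture family the paper studies: a pointwise shared MLP as the equivariant backbone, a swappable invariant global pooling, and a classification head. The `pool` choices shown are the simple baselines; the transport-based and attention-based poolings the paper compares are not reproduced here.

```python
# Minimal permutation invariant set classifier: equivariant backbone ->
# invariant pooling -> head. Changing `pool` is the axis the paper varies.
import torch
import torch.nn as nn

class SetClassifier(nn.Module):
    def __init__(self, d_in=3, d_hid=128, n_classes=40, pool="max"):
        super().__init__()
        # Equivariant backbone: the same MLP applied to every point.
        self.backbone = nn.Sequential(
            nn.Linear(d_in, d_hid), nn.ReLU(),
            nn.Linear(d_hid, d_hid), nn.ReLU(),
        )
        self.pool = pool
        self.head = nn.Linear(d_hid, n_classes)

    def forward(self, x):                 # x: (batch, n_points, d_in)
        h = self.backbone(x)              # equivariant: (batch, n, d_hid)
        if self.pool == "max":            # invariant global pooling
            g = h.max(dim=1).values
        elif self.pool == "mean":
            g = h.mean(dim=1)
        else:                             # "sum"
            g = h.sum(dim=1)
        return self.head(g)               # logits: (batch, n_classes)

logits = SetClassifier()(torch.randn(8, 1024, 3))  # e.g. point cloud batch
```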


Learning-Based Heuristic for Combinatorial Optimization of the Minimum Dominating Set Problem using Graph Convolutional Networks

arXiv.org Artificial Intelligence

Combinatorial optimization problems offer a means to model highly intricate discrete decision problems across diverse domains where pairwise interactions play a crucial role, such as social network analysis [1], wireless communications [2], operations research [3], scheduling [4], and transportation [5]. A considerable portion of these problems belongs to the broader class of NP-hard problems, where it is challenging to find exact solutions, as doing so often necessitates a near-complete enumeration of the entire search space. Consequently, computation of exact solutions is practically infeasible, and approximation or heuristic algorithms are generally favored for practical applications. Although these algorithms exhibit significantly faster runtimes and possess sub-exponential theoretical complexities, they often yield suboptimal solutions. Therefore, a key area of research revolves around the development of approximation or heuristic algorithms that can provide solutions as close to optimal as possible. The minimum dominating set (MDS) problem is an important network-based optimization problem that involves finding the smallest dominating set of a given graph. A dominating set of a graph is a subset of the vertices such that every vertex is either in the dominating set or adjacent to a vertex in the dominating set. The MDS problem aims to find the dominating set of minimum cardinality. Dominating sets have a wide range of applications in various fields, including social networks [6-8], cybersecurity [9], biological networks [10], bioinformatics [11], multi-document summarization [12], and wireless sensor networks [13].
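
As a point of reference for the learning-based heuristic, here is the classical greedy heuristic for MDS that such methods are typically measured against: repeatedly add the vertex that dominates the most not-yet-dominated vertices, which inherits a logarithmic approximation guarantee from greedy set cover. The adjacency-dict representation and `greedy_mds` name are illustrative, not from the paper.

```python
# Classical greedy baseline for minimum dominating set.
def greedy_mds(adj):
    """adj: dict mapping each vertex to the set of its neighbors."""
    undominated = set(adj)
    dom_set = set()
    while undominated:
        # Coverage of v = v itself plus its neighbors, restricted to
        # the vertices that are still undominated.
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        dom_set.add(best)
        undominated -= {best} | adj[best]
    return dom_set

# Example: the path 0-1-2-3-4; {1, 3} dominates every vertex.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_mds(path))  # -> {1, 3}
```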