Isometry
The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation

Neural Information Processing Systems

Comparing metric measure spaces (i.e., metric spaces endowed with a probability distribution) is at the heart of many machine learning problems. The most popular distance between such metric measure spaces is the Gromov-Wasserstein (GW) distance, which is the solution of a quadratic assignment problem. The GW distance is, however, limited to the comparison of metric measure spaces endowed with a *probability* distribution. To alleviate this issue, we introduce two Unbalanced Gromov-Wasserstein formulations: a distance and a more tractable upper-bounding relaxation. Both allow the comparison of metric spaces equipped with arbitrary positive measures, up to isometries.
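To illustrate the quadratic assignment objective behind the GW distance, here is a minimal NumPy sketch (not the paper's code; all names are ours) that evaluates the GW cost of a fixed coupling between two small metric spaces, and shows that an isometric relabelling of the points achieves cost zero:

```python
import numpy as np

def gw_cost(C1, C2, T):
    """GW objective for a fixed coupling T (small spaces only):
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l]."""
    D = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2  # shape (n, m, n, m)
    return float(np.einsum('ijkl,ij,kl->', D, T, T))

# Two isometric spaces: the same three points on a line, relabelled.
x = np.array([0.0, 1.0, 3.0])
perm = np.array([2, 0, 1])                      # relabelling (an isometry)
C1 = np.abs(x[:, None] - x[None, :])            # pairwise distances, space 1
C2 = np.abs(x[perm][:, None] - x[perm][None, :])  # same distances, relabelled

# Coupling that matches each point to its relabelled copy.
T = np.zeros((3, 3))
T[np.arange(3), np.argsort(perm)] = 1 / 3
```

With this coupling `gw_cost(C1, C2, T)` is zero, while a mismatched (e.g. uniform) coupling incurs a strictly positive cost — GW compares the *distance structures*, not the point coordinates themselves.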


Nearly Isometric Embedding by Relaxation

James McQueen, Marina Meila, Dominique Joncas

Neural Information Processing Systems

Many manifold learning algorithms aim to create embeddings with low or no distortion (isometric embeddings). If the data has intrinsic dimension d, it is often impossible to obtain an isometric embedding in d dimensions, but possible in s > d dimensions. Yet, most geometry-preserving algorithms cannot do the latter. This paper proposes an embedding algorithm to overcome this. The algorithm accepts as input, besides the dimension d, an embedding dimension s ≥ d. For any data embedding Y, we compute a Loss(Y), based on the push-forward Riemannian metric associated with Y, which measures the deviation of Y from isometry. Riemannian Relaxation iteratively updates Y in order to decrease Loss(Y). The experiments confirm the superiority of our algorithm in obtaining low-distortion embeddings.
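A loss of this kind can be sketched in a few lines. The following NumPy example (ours, not the paper's implementation) treats a one-dimensional curve embedded in R^s: the push-forward metric g(t) = J(t)^T J(t) is a scalar, and the loss penalises its deviation from 1 (g ≡ 1 means a unit-speed, locally isometric embedding):

```python
import numpy as np

def metric_deviation(f, ts, eps=1e-5):
    """Sketch of Loss(Y) for a curve f: R -> R^s, measured as the mean
    squared deviation of the push-forward metric g = J^T J from the
    identity (here a 1x1 identity, i.e. g = 1)."""
    loss = 0.0
    for t in ts:
        J = (f(t + eps) - f(t - eps)) / (2 * eps)  # finite-difference Jacobian
        g = float(J @ J)                           # push-forward metric (scalar)
        loss += (g - 1.0) ** 2
    return loss / len(ts)

ts = np.linspace(0, 2 * np.pi, 50, endpoint=False)
unit_circle = lambda t: np.array([np.cos(t), np.sin(t)])  # arc-length parametrised
scaled = lambda t: 2 * unit_circle(t)                      # stretches lengths by 2
```

The arc-length parametrised circle has loss ≈ 0 (it is an isometric embedding of its parameter interval), while the scaled copy has g = 4 everywhere, so the loss is (4 − 1)² = 9. Riemannian Relaxation, per the abstract, iteratively updates the embedding to drive such a loss down.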



A Implementation Details

Neural Information Processing Systems

With tangent space optimization, we can use standard Euclidean optimization techniques while still respecting the geometry of the manifold. All experiments were run on Intel Cascade Lake CPUs with Intel Xeon Gold 6230 microprocessors (20 cores, 40 threads, 2.1 GHz, 28 MB cache, 125 W TDP). Datasets: statistics about the datasets used in the knowledge graph experiments can be found in Table 4. Results: in addition to the results provided in Section 6.1, in Table 5 we provide a comparison with other models; we include ComplEx [77], Tucker [9], and Quaternion [92]. In Figure 6 we add plots equivalent to the ones explained in Section 6.4 for other relations; the red dot corresponds to the relation addition R. The same grid search is applied to the baselines.
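The tangent-space optimization pattern can be illustrated with a minimal NumPy sketch (ours, not the paper's code): the Euclidean gradient is projected onto the tangent space of the manifold at the current iterate, a standard gradient step is taken there, and the result is retracted back onto the manifold. Here the manifold is the unit sphere and the objective is the quadratic form x^T A x:

```python
import numpy as np

def minimize_on_sphere(A, steps=500, lr=0.1, seed=0):
    """Riemannian gradient descent for f(x) = x^T A x on the unit sphere."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        g = 2 * A @ x                 # Euclidean gradient of x^T A x
        g = g - (g @ x) * x           # project onto the tangent space at x
        x = x - lr * g                # Euclidean step within the tangent space
        x /= np.linalg.norm(x)        # retraction: renormalise onto the sphere
    return x

A = np.diag([3.0, 2.0, 1.0])
x_star = minimize_on_sphere(A)        # converges to the smallest eigenvector
```

Minimising x^T A x over the sphere recovers the eigenvector of the smallest eigenvalue, so the iterate converges to ±e₃ with objective value 1 — each update is a plain Euclidean step, yet the iterates never leave the manifold.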