The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation
Comparing metric measure spaces (i.e., metric spaces endowed with a probability distribution) is at the heart of many machine learning problems. The most popular distance between such metric measure spaces is the Gromov-Wasserstein (GW) distance, which is the solution of a quadratic assignment problem. The GW distance is, however, limited to the comparison of metric measure spaces endowed with a \emph{probability} distribution. To alleviate this issue, we introduce two Unbalanced Gromov-Wasserstein formulations: a distance and a more tractable upper-bounding relaxation. Both allow the comparison of metric spaces equipped with arbitrary positive measures up to isometries.
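To make the quadratic assignment objective behind the (balanced) GW distance concrete, here is a minimal NumPy sketch; the function name `gw_objective` is illustrative, not the paper's code, and this is the classical GW cost rather than the unbalanced formulation introduced above:

```python
import numpy as np

def gw_objective(C1, C2, T):
    # Quadratic GW objective for a coupling T between two metric measure
    # spaces with pairwise-distance matrices C1 (n x n) and C2 (m x m):
    #   sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]
    M = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2
    return float(np.einsum('ijkl,ij,kl->', M, T, T))

# "Comparison up to isometries": relabelling the points of a space is an
# isometry, so the matching coupling achieves cost zero.
C1 = np.array([[0., 1., 2.],
               [1., 0., 1.],
               [2., 1., 0.]])
perm = [2, 0, 1]
C2 = C1[np.ix_(perm, perm)]      # same space, points relabelled
T = np.zeros((3, 3))
T[perm, range(3)] = 1 / 3        # couple point perm[j] of space 1 to point j
```

The GW distance minimizes this objective over all couplings whose marginals equal the two probability measures; the unbalanced variants above relax exactly that marginal constraint so that arbitrary positive measures can be compared.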
Nearly Isometric Embedding by Relaxation
James McQueen, Marina Meila, Dominique Joncas
Many manifold learning algorithms aim to create embeddings with low or no distortion (isometric embeddings). If the data has intrinsic dimension d, it is often impossible to obtain an isometric embedding in d dimensions, but possible in s > d dimensions. Yet most geometry-preserving algorithms cannot do the latter. This paper proposes an embedding algorithm to overcome this limitation. The algorithm accepts as input, besides the intrinsic dimension d, an embedding dimension s >= d. For any data embedding Y, we compute a Loss(Y), based on the push-forward Riemannian metric associated with Y, which measures the deviation of Y from isometry. Riemannian Relaxation iteratively updates Y in order to decrease Loss(Y). The experiments confirm the superiority of our algorithm in obtaining low-distortion embeddings.
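To see what "deviation from isometry" measures at a single point: for a map with local Jacobian J, the associated metric is h = JᵀJ, and the map is locally isometric exactly when h is the identity. A minimal sketch of this per-point quantity, using a plain Frobenius norm for illustration (the function name and the choice of norm are assumptions; the paper aggregates a related per-point quantity into Loss(Y)):

```python
import numpy as np

def isometry_deviation(J):
    # J: (s x d) Jacobian of the embedding map at one point.
    # The induced metric is h = J^T J (d x d); an isometry has h = I_d.
    # Measure the deviation in squared Frobenius norm.
    h = J.T @ J
    d = J.shape[1]
    return np.linalg.norm(h - np.eye(d), 'fro') ** 2

# A map whose Jacobian has orthonormal columns is locally isometric,
# even when it embeds d = 2 intrinsic dimensions into s = 4 > d:
J_iso = np.eye(4)[:, :2]
# Uniform scaling by 2 distorts all distances and is penalized:
J_scaled = 2.0 * J_iso
```

This is why s > d poses no obstacle in principle: orthonormal Jacobian columns can exist for any s >= d, and the relaxation only needs to drive each local h toward the identity.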
A Implementation Details
With tangent space optimization, we can use standard Euclidean optimization techniques while still respecting the geometry of the manifold. All experiments were run on Intel Cascade Lake CPUs (Intel Xeon Gold 6230: 20 cores, 40 threads, 2.1 GHz, 28 MB cache, 125 W TDP). The red dot corresponds to the relation addition R.

Datasets: Statistics about the datasets used in the knowledge graph experiments can be found in Table 4.

Results: In addition to the results provided in Section 6.1, in Table 5 we provide a comparison with other baselines; we include ComplEx [77], Tucker [9], and Quaternion [92]. In Figure 6 we add plots equivalent to the ones described in Section 6.4 for other relations. The same grid search is applied to the baselines.
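A minimal sketch of what "standard Euclidean optimization in the tangent space" means, using the unit sphere as the simplest concrete manifold (the function name, manifold, and learning rate are illustrative assumptions, not the setup of the experiments above):

```python
import numpy as np

def sphere_step(x, egrad, lr=0.1):
    # One Riemannian gradient step on the unit sphere:
    # 1. project the Euclidean gradient onto the tangent space at x,
    # 2. take an ordinary Euclidean step inside that tangent space,
    # 3. retract back onto the manifold (here, renormalize).
    rgrad = egrad - np.dot(x, egrad) * x
    y = x - lr * rgrad
    return y / np.linalg.norm(y)

x = np.array([1.0, 0.0, 0.0])
x_new = sphere_step(x, egrad=np.array([0.0, 1.0, 0.0]))
```

The same project-step-retract pattern is what lets off-the-shelf Euclidean optimizers be reused on curved parameter spaces: only the projection and the retraction depend on the manifold.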