- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > China > Hong Kong (0.04)
A Background on unbalanced optimal transport
We present in this section concepts and properties which are necessary for the proof of Theorem 1. In this section we frequently use the notion of marginal for measures. We present in this section the proofs of the properties mentioned in Section 2. We refer to Section 2. The proofs are detailed in Liero et al. [2015]. We first start with the existence of minimizers stated in Proposition 1. Thus it suffices to have relative compactness of the set of minimizers. There exists a Borel measurable bijection between the measures' supports. It is the same proof as in the main body. The conic formulation detailed in Section A.3 is obtained by performing the optimal transport on (x, 0). Note that Liero et al. [2015] do not mention that this
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Supervised Learning > Representation Of Examples (0.34)
Outlier-Robust Gromov-Wasserstein for Graph Data
Kong, Lemin, Li, Jiajin, Tang, Jianheng, So, Anthony Man-Cho
The Gromov-Wasserstein (GW) distance is a powerful tool for comparing and aligning probability distributions supported on different metric spaces. Recently, GW has become the main modeling technique for aligning heterogeneous data in a wide range of graph learning tasks. However, the GW distance is known to be highly sensitive to outliers, which can result in large inaccuracies if the outliers are given the same weight as other samples in the objective function. To mitigate this issue, we introduce a new and robust version of the GW distance called RGW. RGW features optimistically perturbed marginal constraints within a Kullback-Leibler divergence-based ambiguity set. To make the benefits of RGW more accessible in practice, we develop a computationally efficient and theoretically provable procedure using the Bregman proximal alternating linearized minimization algorithm. Through extensive experimentation, we validate our theoretical results and demonstrate the effectiveness of RGW on real-world graph learning tasks, such as subgraph matching and partial shape correspondence.
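The outlier sensitivity the abstract describes can be seen directly in the discrete GW objective, which sums pairwise-distance distortions weighted by the coupling. A minimal numpy sketch (the helper name `gw_objective` is ours, not from the paper):

```python
import numpy as np

def gw_objective(C1, C2, T):
    """Discrete GW objective: sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l],
    where C1, C2 are pairwise-distance matrices and T is a coupling."""
    n, m = T.shape
    val = 0.0
    for i in range(n):
        for j in range(m):
            # distortion of pair (i, k) in space X against pair (j, l) in space Y
            diff = C1[i, :][:, None] - C2[j, :][None, :]  # shape (n, m)
            val += T[i, j] * np.sum(diff ** 2 * T)
    return val

# Two copies of the same 3-point metric space matched by the identity
# coupling: the GW objective is exactly zero.
x = np.array([0.0, 1.0, 3.0])
C = np.abs(x[:, None] - x[None, :])
T = np.eye(3) / 3.0
val_clean = gw_objective(C, C, T)

# Moving one point far away (an "outlier") while keeping uniform weights
# inflates the objective, illustrating the sensitivity RGW addresses.
y = np.array([0.0, 1.0, 100.0])
C_out = np.abs(y[:, None] - y[None, :])
val_outlier = gw_objective(C, C_out, T)
```

Because every sample carries the same mass, the single displaced point dominates the objective; RGW's perturbed marginal constraints let the coupling down-weight such points.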
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Asia > China > Hong Kong (0.04)
The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation
Séjourné, Thibault, Vialard, François-Xavier, Peyré, Gabriel
Comparing metric measure spaces (i.e. a metric space endowed with a probability distribution) is at the heart of many machine learning problems. This includes for instance predicting properties of molecules in quantum chemistry or generating graphs with varying connectivity. The most popular distance between such metric measure spaces is the Gromov-Wasserstein (GW) distance, which is the solution of a quadratic assignment problem. This distance has been successfully applied to supervised learning and generative modeling, for applications as diverse as quantum chemistry or natural language processing. The GW distance is however limited to the comparison of metric measure spaces endowed with a *probability* distribution. This strong limitation is problematic for many applications in ML where there is no a priori natural normalization on the total mass of the data. Furthermore, imposing an exact conservation of mass across spaces is not robust to outliers and often leads to irregular matching. To alleviate these issues, we introduce two Unbalanced Gromov-Wasserstein formulations: a distance and a more computationally tractable upper-bounding relaxation. They both allow the comparison of metric spaces equipped with arbitrary positive measures up to isometries.
- Information Technology > Artificial Intelligence > Machine Learning > Supervised Learning > Representation Of Examples (0.54)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)
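As the abstract explains, the unbalanced formulations drop the hard marginal constraints of classical GW in favor of divergence penalties on the marginals of the coupling. A sketch of such a relaxation, where the penalty weights rho_1, rho_2 and the tensorized-KL form are our reading of the abstract rather than the paper's exact definition:

```latex
\mathrm{UGW}(\mu,\nu) \;=\; \inf_{\pi \ge 0}
\iint \big| d_X(x,x') - d_Y(y,y') \big|^2 \,\mathrm{d}\pi(x,y)\,\mathrm{d}\pi(x',y')
\;+\; \rho_1\,\mathrm{KL}\!\left(\pi_1 \otimes \pi_1 \,\middle\|\, \mu \otimes \mu\right)
\;+\; \rho_2\,\mathrm{KL}\!\left(\pi_2 \otimes \pi_2 \,\middle\|\, \nu \otimes \nu\right)
```

Here pi_1 and pi_2 denote the marginals of the coupling pi; replacing the KL penalties with the hard constraints pi_1 = mu and pi_2 = nu recovers the classical GW distance, which is only defined for probability measures.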