To Reviewer # 1

Neural Information Processing Systems

We thank all the reviewers for their constructive feedback. Below we provide specific responses to each reviewer, and we will add more results to the paper. In Response 2, we further highlight important improvements of our method that are overlooked by existing work. On the method: in Fig. 1(e) and Tables 4 and 5, S-GWL can be slightly worse than GWL in node correctness.


Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching

Hongteng Xu, Dixin Luo, Lawrence Carin

arXiv.org Machine Learning

We propose a scalable Gromov-Wasserstein learning (S-GWL) method and establish a novel, theoretically-supported paradigm for large-scale graph analysis. The proposed method is based on the fact that the Gromov-Wasserstein discrepancy is a pseudometric on graphs. Given two graphs, the optimal transport associated with their Gromov-Wasserstein discrepancy provides the correspondence between their nodes and achieves graph matching. When one of the graphs has isolated but self-connected nodes (i.e., a disconnected graph), the optimal transport indicates the clustering structure of the other graph and achieves graph partitioning. Using this concept, we extend our method to multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter graph for multiple observed graphs; the barycenter graph plays the role of the disconnected graph, and since it is learned, so is the clustering. Our method combines a recursive $K$-partition mechanism with a regularized proximal gradient algorithm, whose time complexity is $\mathcal{O}(K(E+V)\log_K V)$ for graphs with $V$ nodes and $E$ edges. To our knowledge, our method is the first attempt to make the Gromov-Wasserstein discrepancy applicable to large-scale graph analysis and to unify graph partitioning and matching into the same framework. It outperforms state-of-the-art graph partitioning and matching methods, achieving a trade-off between accuracy and efficiency.
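To make the partitioning-as-matching idea concrete, below is a minimal from-scratch sketch of an entropic proximal-point Gromov-Wasserstein solver in NumPy. It is NOT the authors' S-GWL implementation: it omits the recursive $K$-partition mechanism that yields the stated complexity, and the solver structure, step size `eps`, iteration counts, and the two-triangle example graph are all our own illustrative choices. It shows how matching a graph against a disconnected target (here the identity matrix, i.e. two isolated self-connected nodes) turns the optimal transport coupling into a cluster assignment.

```python
import numpy as np

def entropic_gw(C1, C2, p, q, eps=0.05, outer=100, inner=50, seed=0):
    """Entropic proximal-point solver for a Gromov-Wasserstein coupling
    (square loss). A from-scratch sketch of the general GW idea, not the
    authors' S-GWL code; hyperparameters here are illustrative."""
    rng = np.random.default_rng(seed)
    # Slightly perturbed product coupling: a perfectly symmetric start can
    # stall at a symmetric saddle point of the non-convex GW objective.
    T = np.outer(p, q) * (1.0 + 0.01 * rng.random((len(p), len(q))))
    for _ in range(outer):
        # Gradient of the square-loss GW objective w.r.t. T:
        #   grad_ij = sum_k C1_ik^2 p_k + sum_l C2_jl^2 q_l - 2 (C1 T C2^T)_ij
        const = ((C1**2) @ p)[:, None] + ((C2**2) @ q)[None, :]
        grad = const - 2.0 * C1 @ T @ C2.T
        # KL-proximal step: Sinkhorn projection of exp(-grad/eps) * T
        K = np.exp(-(grad - grad.min()) / eps) * T  # shift for stability
        K = np.maximum(K, 1e-300)                   # avoid all-zero rows
        u = np.ones_like(p)
        for _ in range(inner):
            v = q / (K.T @ u)
            u = p / (K @ v)
        T = u[:, None] * K * v[None, :]
    return T

# Partitioning as matching against a disconnected "graph": two triangles
# joined by a single edge, matched to C2 = I_2 (two isolated,
# self-connected nodes), following the paper's partitioning view.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
p = np.full(6, 1 / 6)   # uniform node distribution on the observed graph
C2 = np.eye(2)          # disconnected target graph
q = np.full(2, 1 / 2)   # two equally-weighted clusters
T = entropic_gw(A, C2, p, q)
labels = T.argmax(axis=1)  # each node's cluster, read off the coupling
```

The same `entropic_gw` call with two ordinary adjacency matrices performs graph matching instead: `T.argmax(axis=1)` then gives a node correspondence rather than a clustering.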