Towards Principled Graph Transformers
Luis Müller, Daniel Kusuma, Christopher Morris
Graph Neural Networks (GNNs) are the de facto standard in graph learning [Kipf and Welling, 2017, Gilmer et al., 2017, Scarselli et al., 2009, Xu et al., 2019] but suffer from limited expressivity: they cannot distinguish non-isomorphic graphs beyond the power of the 1-dimensional Weisfeiler-Leman algorithm (1-WL) [Morris et al., 2019, Xu et al., 2019]. Hence, recent works introduced higher-order GNNs aligned with the k-dimensional Weisfeiler-Leman (k-WL) hierarchy for graph isomorphism testing [Azizian and Lelarge, 2021, Morris et al., 2019, 2020, 2022], whose expressivity strictly increases with k > 1. The k-WL hierarchy draws on a rich history in graph theory [Babai, 1979, Babai and Kucera, 1979, Babai et al., 1980, Cai et al., 1992, Weisfeiler and Leman, 1968], offering a deep theoretical understanding of k-WL-aligned GNNs. While theoretically intriguing, higher-order GNNs often fail to deliver state-of-the-art performance on real-world problems, making theoretically grounded models less relevant in practice [Azizian and Lelarge, 2021, Morris et al., 2020, 2022]. In contrast, graph transformers [Glickman and Yahav, 2023, He et al., 2023, Ma et al., 2023, Rampášek et al., 2022, Ying et al., 2021] have recently demonstrated state-of-the-art empirical performance. However, they draw their expressive power mostly from positional/structural encodings (PEs), making it difficult to place these models within an expressivity hierarchy such as k-WL. While a few works theoretically aligned graph transformers with the k-WL hierarchy [Kim et al., 2021, 2022, Zhang et al., 2023], we are not aware of any works …
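To make the expressivity claim concrete, below is a minimal Python sketch of the 1-WL test (color refinement) that the abstract refers to; the function names and adjacency-dict graph representation are our own illustration, not code from the paper.

```python
from collections import Counter

def wl_colors(adj, num_iters=None):
    """Run 1-WL color refinement on a graph given as an adjacency dict
    (node -> iterable of neighbors). Returns the histogram of stable colors."""
    num_iters = len(adj) if num_iters is None else num_iters
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(num_iters):
        # A node's new signature: its own color plus the multiset
        # (sorted tuple) of its neighbors' colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Compress signatures into consecutive integer color ids.
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: palette[sigs[v]] for v in adj}
        if new_colors == colors:  # refinement is stable
            break
        colors = new_colors
    return Counter(colors.values())

def wl_indistinguishable(adj_g, adj_h):
    """1-WL test: unequal histograms certify non-isomorphism; equal
    histograms are inconclusive, which is exactly 1-WL's limitation."""
    return wl_colors(adj_g) == wl_colors(adj_h)

# Classic failure case: a 6-cycle and two disjoint triangles are
# non-isomorphic, yet both are 2-regular, so 1-WL cannot separate them.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_indistinguishable(c6, two_triangles))  # True, i.e. inconclusive
```

The 6-cycle example is the standard illustration of 1-WL's limits; the k-WL hierarchy the abstract discusses generalizes this procedure to k-tuples of nodes, separating such pairs at the cost of higher complexity.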
arXiv.org Artificial Intelligence
Feb-6-2024