A More about the K-hop kernel and K-hop message passing
Neural Information Processing Systems
Figure 3: The rooted subtree of node v1 under 1-hop message passing and K-hop message passing. Here we assume that K = 2 and the number of layers is 2.

In this section, we further discuss the two types of K-hop kernel and K-hop message passing.

A.1 More about K-hop kernel

First, recall the shortest path distance kernel and the graph diffusion kernel defined in Definitions 1 and 2. From the two definitions, the first thing we can conclude is that the set of K-hop neighbors of node v is the same under the two kernels, even though the hop at which a particular neighbor appears can differ.

A.2 More about K-hop message passing

Here, we use the example in Figure 3 to illustrate how K-hop message passing works and to compare it with 1-hop message passing. The input graph is shown at the top left of the figure. Suppose we want to learn the representation of node v1 with a 2-layer message passing GNN. First, if we perform 1-hop message passing, it encodes a height-2 rooted subtree, shown at the top right of the figure. Note that every node is processed with the same set of parameters, which is indicated by filling each node with the same color (white in the figure). Next, we consider a 2-hop message passing GNN with the shortest path distance kernel. The rooted subtree of node v1 is shown in the middle of the figure. Here, different sets of parameters are used for different hops, indicated by filling the nodes of each hop with a different color (blue for the 1st hop and yellow for the 2nd hop). Finally, at the bottom of the figure, we show the 2-hop message passing GNN with the graph diffusion kernel. Its rooted subtree differs from the one built with the shortest path distance kernel, since a node can appear in both the 1st-hop and 2nd-hop neighborhoods.
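To make the relationship between the two kernels concrete, here is a small self-contained sketch; the toy graph, node labels, and function names are illustrative assumptions, not taken from the paper. It computes the per-hop neighborhoods of a node under each kernel: shortest path distance (nodes at distance exactly k) versus graph diffusion (nodes reachable by a walk of length exactly k), and checks that the union over hops 1..K coincides even though the per-hop sets differ.

```python
from collections import deque

# Hypothetical toy graph as an adjacency list (an assumed example).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
K = 2

def spd_hops(adj, v, K):
    """Per-hop neighborhoods under the shortest path distance kernel:
    the k-th hop holds nodes at shortest path distance exactly k from v."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return {k: {u for u, d in dist.items() if d == k} for k in range(1, K + 1)}

def gd_hops(adj, v, K):
    """Per-hop neighborhoods under the graph diffusion kernel:
    the k-th hop holds nodes reachable by a walk of length exactly k,
    i.e. nodes u with (A^k)_{vu} > 0."""
    hops, frontier = {}, {v}
    for k in range(1, K + 1):
        frontier = {w for u in frontier for w in adj[u]}
        hops[k] = frontier
    return hops

spd, gd = spd_hops(adj, 0, K), gd_hops(adj, 0, K)
# Per-hop sets differ: under graph diffusion a node can reappear at a later hop.
print("spd:", spd)
print("gd: ", gd)
# But the K-hop neighbor *sets* coincide (excluding v itself, which a
# length-2 walk can revisit).
print(set().union(*spd.values()) == set().union(*gd.values()) - {0})
```

Note how node 0 reappears in the 2nd hop under graph diffusion (a walk can return to its start), which is exactly the per-hop difference the rooted subtrees in Figure 3 depict.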
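The hop-wise parameterization described above can also be sketched in a few lines. This is a minimal illustration under assumed names, scalar features, and hand-picked weights, not the paper's actual model: hop k gets its own weight (mirroring the per-hop colors in Figure 3), messages from all nodes in hop k are summed, and two such layers are stacked to match the 2-layer rooted subtree, here with the shortest path distance kernel.

```python
from collections import deque

def spd_hops(adj, v, K):
    # Nodes grouped by shortest path distance (exactly k) from v.
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return {k: {u for u, d in dist.items() if d == k} for k in range(1, K + 1)}

def khop_layer(adj, h, K, w_self, w_hop):
    """One layer of K-hop message passing with scalar node features:
    hop k aggregates with its own weight w_hop[k]."""
    out = {}
    for v in adj:
        hops = spd_hops(adj, v, K)
        agg = w_self * h[v]
        for k in range(1, K + 1):
            agg += w_hop[k] * sum(h[u] for u in hops[k])
        out[v] = agg  # a real model would apply a nonlinearity and learn the weights
    return out

# Hypothetical path graph 0-1-2 with all-ones features (an assumed example).
adj = {0: [1], 1: [0, 2], 2: [1]}
h = {0: 1.0, 1: 1.0, 2: 1.0}
h = khop_layer(adj, h, K=2, w_self=2.0, w_hop={1: 1.0, 2: 0.5})
h = khop_layer(adj, h, K=2, w_self=2.0, w_hop={1: 1.0, 2: 0.5})
print(h)
```

Swapping `spd_hops` for a diffusion-based grouping (nodes reachable by a length-k walk) would let the same node contribute at several hops, which is why the bottom rooted subtree in Figure 3 differs from the middle one.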