heterophily
We thank the reviewers for their time and valuable feedback. Below, we clarify several important points raised by the reviewers. An extra page in the final version will allow us to include the requested details. We believe these clarifications, together with new analyses, resolve all key issues raised.
Reviewers [R3, R4] find the paper clear; most reviewers [R2, R3, R4] find the problem of overcoming the implicit homophily assumption important.
We thank the reviewers for their thoughtful and constructive feedback. While we only address the major discussion points here, we will incorporate all feedback in the final version. We will revise the paper to clarify the scope of our contributions: while we agree that the designs themselves are not new, our analysis of them in the heterophily setting is novel. We will make this clearer.
Scalable Heterophilous Graph Neural Network with Decoupled Embeddings
Heterophilous Graph Neural Networks (GNNs) are a family of GNNs that specialize in learning on graphs under heterophily, where connected nodes tend to have different labels. Most existing heterophilous models incorporate iterative non-local computations to capture node relationships. However, these approaches are difficult to apply to large-scale graphs due to their high computational cost and the challenges of adopting minibatch training schemes.
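The abstract does not spell out the architecture, so the following is only a minimal sketch of the general "decoupled" recipe that makes such models minibatch-friendly: propagate features over the graph once as a gradient-free preprocessing step, then train a plain MLP on the precomputed multi-hop embeddings. The function and class names (precompute_hops, DecoupledMLP) are hypothetical and not taken from the paper.

```python
# Sketch of decoupled propagation + minibatch training (assumed recipe, not the
# paper's exact architecture).
import torch
import torch.nn as nn

def precompute_hops(adj_norm: torch.Tensor, x: torch.Tensor, num_hops: int) -> torch.Tensor:
    """Compute [X, A X, A^2 X, ...] once, outside the training loop (adj_norm is sparse)."""
    hops = [x]
    for _ in range(num_hops):
        hops.append(torch.sparse.mm(adj_norm, hops[-1]))
    return torch.cat(hops, dim=1)  # shape: num_nodes x (num_hops + 1) * feat_dim

class DecoupledMLP(nn.Module):
    """Classifier trained on the precomputed embeddings; supports node minibatches."""
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Usage: z = precompute_hops(adj_norm, features, num_hops=3); afterwards the
# training loop only iterates over minibatches of rows of z, never touching the graph.
```

Because the graph is touched only in the preprocessing step, training cost scales with the number of labeled nodes rather than with neighborhood expansion, which is the scalability argument the abstract alludes to.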
From Trainable Negative Depth to Edge Heterophily in Graphs
Finding the proper depth $d$ of a graph convolutional network (GCN) that provides strong representation ability has drawn significant attention, yet it largely remains an open problem for the graph learning community. Although noteworthy progress has been made, the depth, or number of layers, of a GCN is realized by a series of graph convolution operations, which naturally makes $d$ a positive integer ($d \in \mathbb{N}^+$). An interesting question is whether breaking the constraint of $\mathbb{N}^+$ by making $d$ a real number ($d \in \mathbb{R}$) can bring new insights into graph learning mechanisms. In this work, by redefining the GCN depth $d$ as a trainable parameter continuously adjustable within $(-\infty, +\infty)$, we open a new door to controlling its signal processing capability to model graph homophily/heterophily (whether nodes with similar/dissimilar labels or attributes tend to be inter-connected).
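To make the idea of a real-valued, trainable depth concrete, here is a minimal sketch that realizes $d \in \mathbb{R}$ as a spectral filter $g_d(\lambda) = (1 - \lambda/2)^d$ on the eigenvalues of the symmetrically normalized Laplacian (which lie in $[0, 2]$). This parameterization, the class name RealDepthFilter, and the epsilon stabilizer are assumptions for illustration; the paper's exact construction may differ. The point it demonstrates is that positive $d$ acts as a low-pass (homophily-friendly) filter, negative $d$ as a high-pass (heterophily-friendly) one, and $d$ receives gradients like any other parameter.

```python
# Hypothetical realization of a trainable real-valued depth d via a spectral filter.
import torch
import torch.nn as nn

class RealDepthFilter(nn.Module):
    def __init__(self, laplacian: torch.Tensor, init_depth: float = 1.0, eps: float = 1e-6):
        super().__init__()
        # Dense eigendecomposition; only practical for small graphs, used here for clarity.
        lam, U = torch.linalg.eigh(laplacian)           # eigenvalues of L_sym, in [0, 2]
        self.register_buffer("lam", lam)
        self.register_buffer("U", U)
        self.depth = nn.Parameter(torch.tensor(init_depth))  # d is a real, trainable scalar
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # g_d(lambda) = (1 - lambda/2 + eps)^d is well defined for any real d,
        # since the base stays strictly positive.
        gains = (1.0 - self.lam / 2.0 + self.eps) ** self.depth
        return self.U @ (gains.unsqueeze(1) * (self.U.T @ x))

# During training, gradients flow into `depth`, so d can drift below zero when
# high-frequency (heterophilous) signal helps the downstream loss.
```

With integer $d$ and $d \geq 1$ this filter coincides with stacking $d$ smoothing propagations; allowing $d < 0$ inverts the smoothing and sharpens differences between neighbors, which is the mechanism the abstract ties to edge heterophily.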