
Neural Information Processing Systems

We sincerely thank all reviewers for their valuable comments. Our responses are listed below. Regarding the question "Does Claim 4.1 rely on a specific distribution (q1 under Weaknesses)?": if we understand it correctly, this refers to the approximate posterior. If so, we do consider the effects of the KL term in the proof. The approximate posterior is Dir(β), a Dirichlet posterior paired with a Dirichlet prior Dir(α).
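The KL term mentioned in the response, between a Dirichlet posterior Dir(β) and a Dirichlet prior Dir(α), has a standard closed form. The sketch below is illustrative only (the function name and parameter shapes are ours, not from the rebuttal) and implements that textbook identity:

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(beta, alpha):
    """KL( Dir(beta) || Dir(alpha) ) in closed form.

    beta plays the role of the posterior parameters and alpha the prior
    parameters, matching the Dir(beta) / Dir(alpha) pairing in the response.
    """
    beta = np.asarray(beta, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    b0, a0 = beta.sum(), alpha.sum()
    return (gammaln(b0) - gammaln(beta).sum()
            - gammaln(a0) + gammaln(alpha).sum()
            + ((beta - alpha) * (digamma(beta) - digamma(b0))).sum())

# KL is zero iff the two distributions coincide, and positive otherwise
kl_same = kl_dirichlet([2.0, 3.0], [2.0, 3.0])
kl_diff = kl_dirichlet([2.0, 3.0, 4.0], [1.0, 1.0, 1.0])
```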




MGNNI: Multiscale Graph Neural Networks with Implicit Layers

Neural Information Processing Systems

Recently, implicit graph neural networks (GNNs) have been proposed to capture long-range dependencies in underlying graphs. In this paper, we identify and justify two weaknesses of implicit GNNs: constrained expressiveness due to their limited effective range for capturing long-range dependencies, and an inability to capture multiscale information on graphs at multiple resolutions. To demonstrate the limited effective range of previous implicit GNNs, we first provide a theoretical analysis and point out the intrinsic relationship between the effective range and the convergence of the iterative equations used in these models. To mitigate these weaknesses, we propose a multiscale graph neural network with implicit layers (MGNNI), which can model multiscale structures on graphs and has an expanded effective range for capturing long-range dependencies. We conduct comprehensive experiments on both node classification and graph classification, showing that MGNNI outperforms representative baselines and is better at modeling multiscale structures and capturing long-range dependencies.
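The convergence issue the abstract refers to can be seen in a minimal sketch of the kind of fixed-point layer implicit GNNs use (this is a generic illustration, not the authors' MGNNI formulation): node states Z are iterated as Z ← γ·S·Z·W + X until convergence. The iteration converges when the map is a contraction (roughly γ·‖S‖·‖W‖ < 1), and the same contraction factor governs how fast distant-node influence decays, i.e. the effective range.

```python
import numpy as np

def implicit_gnn_fixed_point(S, X, W, gamma=0.8, tol=1e-8, max_iter=1000):
    """Iterate Z <- gamma * S @ Z @ W + X to an equilibrium.

    S: (n, n) normalized adjacency, X: (n, d) input features,
    W: (d, d) weight matrix. A contraction condition such as
    gamma * ||S|| * ||W|| < 1 guarantees a unique fixed point; the
    tighter the contraction, the shorter the effective range.
    """
    Z = np.zeros_like(X)
    for _ in range(max_iter):
        Z_next = gamma * S @ Z @ W + X
        if np.linalg.norm(Z_next - Z) < tol:
            return Z_next
        Z = Z_next
    return Z

# Toy example: 4-node path graph with row-normalized adjacency.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = A / A.sum(axis=1, keepdims=True)
X = rng.standard_normal((4, 3))
W = 0.5 * np.eye(3)  # small norm keeps the iteration contractive
Z = implicit_gnn_fixed_point(S, X, W)
# Z satisfies the equilibrium equation up to tolerance:
residual = np.linalg.norm(Z - (0.8 * S @ Z @ W + X))
```

A multiscale variant in the spirit of MGNNI would combine equilibria computed with propagation matrices at several scales (e.g. powers of S), rather than a single S as above.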