Stochastic Aggregation in Graph Neural Networks
Wang, Yuanqing; Karaletsos, Theofanis
Graph neural networks (GNNs) manifest pathologies including over-smoothing and limited discriminating power as a result of suboptimally expressive aggregating mechanisms. We herein present a unifying framework for stochastic aggregation (STAG) in GNNs, where noise is (adaptively) injected into the aggregation process from the neighborhood to form node embeddings. We provide theoretical arguments that STAG models, with little overhead, remedy both of the aforementioned problems. In addition to fixed-noise models, we also propose probabilistic versions of STAG models and a variational inference framework to learn the noise posterior. We conduct illustrative experiments clearly targeting oversmoothing and multiset aggregation limitations.

Nonetheless, such an aggregation scheme also causes limitations of GNNs. Firstly, without proper choices of aggregation functions, GNNs are not always as powerful as the WL test. When pooling from (transformed) neighborhood representations, if the underlying set for the neighborhood multiset (see Definition 1 of Xu et al. (2018)) is countable, as has been studied in detail in Xu et al. (2018), although different multiset functions learn different attributes of the neighborhood--MAX learns distinct elements and MEAN learns distributions--only SUM is injective and thus capable...
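The snippet compresses two technical points: the expressivity gap between MEAN/MAX and SUM as multiset aggregators, and the idea of injecting noise into neighborhood aggregation. Below is a minimal sketch of both, assuming PyTorch; the helper name `stag_sum`, the Gaussian noise, and the scale `sigma` are an illustrative reading of a fixed-noise scheme, not the paper's actual API.

```python
import torch

# Two neighborhoods whose feature multisets MEAN and MAX cannot
# distinguish, but SUM can (cf. Definition 1 and the discussion in
# Xu et al. (2018)).
a = torch.tensor([[1.0], [1.0]])           # multiset {1, 1}
b = torch.tensor([[1.0], [1.0], [1.0]])    # multiset {1, 1, 1}

print(a.mean(dim=0), b.mean(dim=0))              # equal: MEAN sees only the distribution
print(a.max(dim=0).values, b.max(dim=0).values)  # equal: MAX sees only distinct elements
print(a.sum(dim=0), b.sum(dim=0))                # differ: SUM separates these multisets


def stag_sum(neighbors: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Fixed-noise stochastic aggregation sketch: perturb each neighbor
    embedding before pooling to form a stochastic node embedding."""
    noise = sigma * torch.randn_like(neighbors)
    return (neighbors + noise).sum(dim=0)


print(stag_sum(b))  # one stochastic draw of the aggregated embedding
```

In the probabilistic STAG variants described in the abstract, the noise distribution would itself be learned via variational inference rather than fixed as above.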
arXiv.org Artificial Intelligence
Feb-25-2021
- Country:
- North America > United States > New York > New York County > New York City (0.14)
- Genre:
- Research Report (0.50)