
LP Filter


The proposed LP filter is fundamentally different from previous weighted …

Neural Information Processing Systems

Due to space constraints we only address major concerns; all suggestions will be incorporated in the final version. Experimentally, we have observed that when using previous weighted … We will compare with and cite the related work (gTop-k) in the final draft. In Sec. 3 we assume minibatch SGD has a small critical batch size, so that it approximates a full gradient-descent iteration regardless of dataset size. Appendix F shows ScaleCom's scalability in system performance; more … Analogously, we perform filtering on the residual gradients (see Eq. (5)); the connection will be discussed in the revised version.
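The snippet above mentions filtering the residual gradients, i.e. the compression error accumulated locally and fed back into later updates. A minimal sketch of that residual (error-feedback) mechanism is below; the function name and the plain top-k selection are illustrative assumptions, not ScaleCom's actual filter from Eq. (5).

```python
import numpy as np

def topk_compress(grad, residual, k):
    """Sketch of residual-gradient (error-feedback) compression.

    The accumulated residual is added to the fresh gradient before
    sparsification, and the part that is dropped becomes the new
    residual for the next step, so no gradient mass is lost.
    """
    corrected = grad + residual                 # fold in past compression error
    idx = np.argsort(np.abs(corrected))[-k:]    # keep the k largest-magnitude entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]                # transmitted (sparse) gradient
    new_residual = corrected - sparse           # error fed back next iteration
    return sparse, new_residual
```

By construction, `sparse + new_residual` always equals the residual-corrected gradient, which is the invariant that error-feedback schemes rely on.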




Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks

Luan, Sitao, Zhao, Mingde, Hua, Chenqing, Chang, Xiao-Wen, Precup, Doina

arXiv.org Machine Learning

The core operation of current Graph Neural Networks (GNNs) is aggregation, enabled by the graph Laplacian or message passing, which filters neighborhood node information. Though effective for various tasks, in this paper we show that aggregation is potentially a problematic factor underlying all GNN methods on certain datasets, as it forces node representations to become similar, so that nodes gradually lose their identity and become indistinguishable. Hence, we augment the aggregation operations with their dual, i.e., diversification operators that make nodes more distinct and preserve their identity. This augmentation replaces aggregation with a two-channel filtering process that, in theory, is beneficial for enriching the node representations. In practice, the proposed two-channel filters can easily be patched onto existing GNN methods with diverse training strategies, including spectral and spatial (message-passing) methods. In experiments, we observe the desired characteristics of the models and significant performance boosts over the baselines on 9 node-classification tasks.
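The two-channel idea can be sketched in a few lines: the usual normalized aggregation operator acts as a low-pass graph filter, and its complement acts as a high-pass (diversifying) filter. The concatenation of the two channels below is one simple way to combine them, assumed here for illustration; the paper's actual combination and training strategy may differ.

```python
import numpy as np

def two_channel_filter(A, X):
    """Sketch of aggregation augmented with its diversification dual.

    A_hat = D^{-1/2} (A + I) D^{-1/2} is the standard low-pass
    aggregation operator (smooths each node toward its neighbors);
    I - A_hat is the corresponding high-pass filter (sharpens the
    differences between a node and its neighbors).
    """
    n = A.shape[0]
    A_loop = A + np.eye(n)                      # add self-loops
    d = A_loop.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    low = D_inv_sqrt @ A_loop @ D_inv_sqrt      # low-pass: aggregation channel
    high = np.eye(n) - low                      # high-pass: diversification channel
    return np.concatenate([low @ X, high @ X], axis=1)
```

Because the two operators sum to the identity, the two channels together preserve all the information in `X` while separating its smooth and non-smooth components.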