To Reviewer #1: Thank you for your positive comments on our paper!
Q1: Why is full-batch GCN outperformed by LADIES? It is true that LADIES is designed as an approximation of the original GCN. The reason is that real graphs are often noisy and incomplete.
Figure 1: Experiments on the PubMed dataset, which contains 19,717 nodes and 44,338 edges.
Q1: "Similar names generate several misunderstandings and confusions." We apologize that the title causes confusion.
Calibrate and Debias Layer-wise Sampling for Graph Convolutional Networks
Chen, Yifan, Xu, Tianning, Hakkani-Tur, Dilek, Jin, Di, Yang, Yun, Zhu, Ruoqing
Multiple sampling-based methods have been developed for approximating and accelerating node embedding aggregation in graph convolutional networks (GCNs) training. Among them, a layer-wise approach recursively performs importance sampling to select neighbors jointly for existing nodes in each layer. This paper revisits the approach from a matrix approximation perspective, and identifies two issues in the existing layer-wise sampling methods: suboptimal sampling probabilities and estimation biases induced by sampling without replacement. To address these issues, we accordingly propose two remedies: a new principle for constructing sampling probabilities and an efficient debiasing algorithm. The improvements are demonstrated by extensive analyses of estimation variance and experiments on common benchmarks.
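The matrix-approximation view of layer-wise sampling can be made concrete with a small sketch: estimate the aggregation P @ H by drawing a few columns of P with importance probabilities and reweighting. The probabilities below follow a generic column-norm heuristic, not necessarily the calibrated principle this paper proposes, and all function and variable names are our own.

```python
import numpy as np

def sampled_aggregation(P, H, s, seed=0):
    """Monte-Carlo estimate of the aggregation P @ H from s sampled columns.

    Illustrative sketch only: the probabilities below are proportional to
    the product of column norms of P and row norms of H (a standard
    matrix-approximation heuristic), not the calibrated probabilities the
    paper proposes.  Sampling is done *with* replacement, which keeps the
    estimator unbiased without the paper's debiasing step.
    """
    rng = np.random.default_rng(seed)
    n = P.shape[1]
    q = np.linalg.norm(P, axis=0) * np.linalg.norm(H, axis=1)
    q = q / q.sum()
    idx = rng.choice(n, size=s, p=q)          # sample with replacement
    # Importance weighting keeps E[estimate] == P @ H
    return (P[:, idx] / (s * q[idx])) @ H[idx, :]
```

Averaging many independent estimates converges to the exact product, which is the unbiasedness that sampling without replacement breaks and the paper's debiasing algorithm restores.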
MG-GCN: Fast and Effective Learning with Mix-grained Aggregators for Training Large Graph Convolutional Networks
Huang, Tao, Zhang, Yihan, Wu, Jiajing, Fang, Junyuan, Zheng, Zibin
Graph convolutional networks (GCNs) have recently been employed as an important tool in many graph-based applications. Inspired by convolutional neural networks (CNNs), GCNs generate the embeddings of nodes by aggregating the information of their neighbors layer by layer. However, the high computational and memory cost of GCNs, due to the recursive neighborhood expansion across GCN layers, makes training on large graphs infeasible. To tackle this issue, several sampling methods for the information-aggregation process have been proposed to train GCNs in a mini-batch stochastic gradient descent (SGD) manner. Nevertheless, these sampling strategies sometimes raise concerns about insufficient information collection, which may hinder learning performance in terms of accuracy and convergence. To resolve the dilemma between accuracy and efficiency, we propose to use aggregators with different granularities to gather neighborhood information in different layers. Then, a degree-based sampling strategy, which avoids the exponential complexity, is constructed for sampling a fixed number of nodes. Combining the above two mechanisms, the proposed model, named Mix-grained GCN (MG-GCN), achieves state-of-the-art performance in terms of accuracy, training speed, convergence speed, and memory cost through a comprehensive set of experiments on four commonly used benchmark datasets and a new Ethereum dataset.
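The degree-based, fixed-size sampling step described in the abstract can be sketched as follows; the function name and the exact probability normalization are our assumptions, not the authors' implementation.

```python
import numpy as np

def degree_based_sample(adj, num_nodes, seed=0):
    """Draw a fixed-size node set with probability proportional to degree.

    A sketch of the degree-based strategy the abstract describes; the
    actual MG-GCN sampler may normalize differently or combine this with
    the mix-grained aggregators.  Fixing the sample size per layer is
    what sidesteps the exponential neighborhood expansion of node-wise
    sampling.
    """
    rng = np.random.default_rng(seed)
    deg = adj.sum(axis=1).astype(float)
    p = deg / deg.sum()
    return rng.choice(adj.shape[0], size=num_nodes, replace=False, p=p)
```

Isolated nodes receive probability zero and are never sampled, so every drawn node contributes at least one edge to the aggregation.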
Urban Traffic Flow Forecast Based on FastGCRNN
Zhang, Ya, Lu, Mingming, Li, Haifeng
Traffic forecasting is an important prerequisite for the application of intelligent transportation systems in urban traffic networks. Existing works adopt RNNs and CNNs/GCNs, among which GCRN is the state-of-the-art approach, to characterize the temporal and spatial correlation of traffic flows. However, it is hard to apply GCRN to large-scale road networks due to its high computational complexity. To address this problem, we propose to abstract the road network into a geometric graph and build a Fast Graph Convolution Recurrent Neural Network (FastGCRNN) to model the spatial-temporal dependencies of traffic flow. Specifically, we use the FastGCN unit to efficiently capture the topological relationship between roads and their surrounding roads in the graph, reducing the computational complexity through importance sampling; combine it with a GRU unit to capture the temporal dependency of traffic flow; and embed the spatio-temporal features into a Seq2Seq model based on the encoder-decoder framework. Experiments on large-scale traffic datasets illustrate that the proposed method can greatly reduce computational complexity and memory consumption while maintaining relatively high accuracy.
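A minimal sketch of the encoder the abstract describes, with toy weight names and shapes of our own choosing: at each time step a FastGCN-style sampled graph convolution extracts spatial features, and a plain GRU cell accumulates them over time.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fastgcn_layer(A_norm, H, W, s, rng):
    """FastGCN-style layer: sample s nodes layer-wise with probabilities
    proportional to squared column norms of A_norm, then aggregate with
    importance weights (a sketch, not the authors' code)."""
    q = (A_norm ** 2).sum(axis=0)
    q = q / q.sum()
    idx = rng.choice(A_norm.shape[1], size=s, p=q)      # with replacement
    return np.tanh((A_norm[:, idx] / (s * q[idx])) @ H[idx] @ W)

def gru_step(h, x, params):
    """Plain GRU update; params is a dict of weight matrices (assumed names)."""
    z = sigmoid(x @ params["Wz"] + h @ params["Uz"])     # update gate
    r = sigmoid(x @ params["Wr"] + h @ params["Ur"])     # reset gate
    h_new = np.tanh(x @ params["Wh"] + (r * h) @ params["Uh"])
    return (1.0 - z) * h + z * h_new

def encode(A_norm, X_seq, W_gc, params, s, seed=0):
    """Seq2Seq encoder: per time step, a sampled graph convolution extracts
    spatial features and a GRU accumulates them into the hidden state."""
    rng = np.random.default_rng(seed)
    n, d = X_seq.shape[1], params["Wz"].shape[1]
    h = np.zeros((n, d))
    for X_t in X_seq:                                    # iterate over time
        spatial = fastgcn_layer(A_norm, X_t, W_gc, s, rng)
        h = gru_step(h, spatial, params)
    return h
```

The decoder half of the Seq2Seq model would run the same GRU in reverse, emitting one forecast step at a time; it is omitted here for brevity.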
Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks
Zou, Difan, Hu, Ziniu, Wang, Yewen, Jiang, Song, Sun, Yizhou, Gu, Quanquan
Graph convolutional networks (GCNs) have recently received wide attention, due to their successful applications in different graph tasks and domains. Training GCNs on a large graph, however, is still a challenge. Original full-batch GCN training requires calculating the representations of all the nodes in the graph per GCN layer, which brings high computation and memory costs. To alleviate this issue, several sampling-based methods have been proposed to train GCNs on a subset of nodes. Among them, the node-wise neighbor-sampling method recursively samples a fixed number of neighbor nodes, and thus its computation cost suffers from an exponentially growing neighborhood size; the layer-wise importance-sampling method discards the neighbor-dependent constraints, and thus the nodes sampled across layers suffer from a sparse-connection problem. To deal with these two problems, we propose a new effective sampling algorithm called LAyer-Dependent ImportancE Sampling (LADIES). Based on the sampled nodes in the upper layer, LADIES selects their neighborhood nodes, constructs a bipartite subgraph, and computes the importance probabilities accordingly. Then, it samples a fixed number of nodes by the calculated probabilities, and recursively conducts this procedure per layer to construct the whole computation graph. We show, theoretically and experimentally, that our proposed sampling algorithm outperforms the previous sampling methods in terms of both time and memory costs. Furthermore, LADIES is shown to have better generalization accuracy than the original full-batch GCN, due to its stochastic nature.
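The layer-dependent sampling step described in the abstract can be sketched as follows; variable names and the exact normalization are our assumptions, not the released implementation.

```python
import numpy as np

def ladies_layer_sample(adj_norm, upper_nodes, s, seed=0):
    """One layer of LADIES-style sampling (a sketch, not the authors' code).

    Restrict the normalized adjacency to the rows of the nodes already
    sampled for the layer above, set each candidate's importance
    probability proportional to the squared column norm of that
    row-submatrix, and draw a fixed number s of nodes without
    replacement.  Exact normalization details may differ from the
    released implementation.
    """
    rng = np.random.default_rng(seed)
    rows = adj_norm[upper_nodes, :]              # bipartite sub-adjacency
    p = (rows ** 2).sum(axis=0)
    support = np.flatnonzero(p)                  # only reachable neighbors
    probs = p[support] / p[support].sum()
    k = min(s, support.size)
    return rng.choice(support, size=k, replace=False, p=probs)
```

Because candidates outside the upper layer's neighborhood get probability zero, every sampled node is connected to the layer above, which is exactly how LADIES avoids the sparse-connection problem of plain layer-wise sampling.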