RapidGNN: Energy and Communication-Efficient Distributed Training on Large-Scale Graph Neural Networks
Arefin Niam, Tevfik Kosar, M S Q Zulkar Nine
Graph Neural Networks (GNNs) have become popular across a diverse set of tasks that explore structural relationships between entities. However, due to the highly connected structure of the datasets, distributed training of GNNs on large-scale graphs poses significant challenges. Traditional sampling-based approaches mitigate the computational load, yet the communication overhead remains a challenge. This paper presents RapidGNN, a distributed GNN training framework that uses deterministic sampling-based scheduling to enable efficient cache construction and prefetching of remote features. Evaluation on benchmark graph datasets demonstrates RapidGNN's effectiveness across different scales and topologies. RapidGNN improves end-to-end training throughput by 2.46x to 3.00x on average over baseline methods across the benchmark datasets, while reducing remote feature fetches by 9.70x to 15.39x. RapidGNN further demonstrates near-linear scalability with an increasing number of computing units. It also improves energy efficiency over the baseline methods by 44% on CPU and 32% on GPU.
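To make the core idea concrete, the sketch below illustrates how deterministic sampling can enable prefetching in principle; it is not the authors' implementation. The partition layout, graph size, seed, and the `fetch_remote` helper are all illustrative assumptions. The point it demonstrates: if every worker seeds its mini-batch sampler identically, the full sampling schedule is known ahead of time, so the remote feature IDs needed for batch i+1 can be fetched into a local cache while batch i is being computed.

```python
# Minimal sketch (assumptions throughout): deterministic, seeded sampling
# lets a worker regenerate the exact mini-batch schedule in advance and
# prefetch remote features one batch ahead of the compute loop.
import random

NUM_NODES = 1_000                       # toy graph size (assumption)
LOCAL_PARTITION = set(range(0, 500))    # node IDs owned by this worker (assumption)
BATCH_SIZE = 64
NUM_BATCHES = 5
SEED = 42                               # shared seed => identical schedule on all workers


def sampling_schedule(seed: int) -> list[list[int]]:
    """Regenerate the exact sequence of sampled mini-batches from the seed."""
    rng = random.Random(seed)
    return [rng.sample(range(NUM_NODES), BATCH_SIZE) for _ in range(NUM_BATCHES)]


def remote_ids(batch: list[int]) -> set[int]:
    """Node IDs in a batch whose features live on another worker."""
    return {n for n in batch if n not in LOCAL_PARTITION}


cache: dict[int, str] = {}


def fetch_remote(ids: set[int]) -> None:
    """Stand-in for the RPC that pulls remote features into the local cache."""
    for n in ids:
        cache.setdefault(n, f"feat[{n}]")  # placeholder feature payload


# Build the schedule once, then overlap prefetching with computation.
schedule = sampling_schedule(SEED)
fetch_remote(remote_ids(schedule[0]))          # warm the cache for batch 0
for i, batch in enumerate(schedule):
    if i + 1 < len(schedule):
        fetch_remote(remote_ids(schedule[i + 1]))  # prefetch the next batch
    hits = sum(n in LOCAL_PARTITION or n in cache for n in batch)
    print(f"batch {i}: {hits}/{len(batch)} features available locally")
```

Under this scheme every feature for a batch is either local or already cached by the time the batch runs, which is the mechanism by which the paper's reported reduction in on-demand remote fetches becomes plausible.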