
Design Space for Graph Neural Networks

Neural Information Processing Systems

The rapid evolution of Graph Neural Networks (GNNs) has led to a growing number of new architectures as well as novel applications. However, current research focuses on proposing and evaluating specific architectural designs of GNNs, such as GCN, GIN, or GAT, as opposed to studying the more general design space of GNNs that consists of a Cartesian product of different design dimensions, such as the number of layers or the type of the aggregation function. Additionally, GNN designs are often specialized to a single task, yet few efforts have been made to understand how to quickly find the best GNN design for a novel task or a novel dataset. Here we define and systematically study the architectural design space for GNNs which consists of 315,000 different designs over 32 different predictive tasks. Our approach features three key innovations: (1) A general GNN design space; (2) a GNN task space with a similarity metric, so that for a given novel task/dataset, we can quickly identify/transfer the best performing architecture; (3) an efficient and effective design space evaluation method which allows insights to be distilled from a huge number of model-task combinations. Our key results include: (1) A comprehensive set of guidelines for designing well-performing GNNs; (2) while best GNN designs for different tasks vary significantly, the GNN task space allows for transferring the best designs across different tasks; (3) models discovered using our design space achieve state-of-the-art performance. Overall, our work offers a principled and scalable approach to transition from studying individual GNN designs for specific tasks, to systematically studying the GNN design space and the task space. Finally, we release GraphGym, a powerful platform for exploring different GNN designs and tasks.
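The "Cartesian product of design dimensions" idea above can be made concrete with a small sketch: treat each design dimension as a list of options and enumerate every combination. The dimension names and values below are illustrative placeholders, a tiny subset rather than the paper's actual 315,000-design space.

```python
from itertools import product

# Illustrative design dimensions (a small subset; the paper's full
# space combines many more dimensions into 315,000 designs).
design_space = {
    "num_layers": [2, 4, 6, 8],
    "aggregation": ["mean", "max", "sum"],
    "activation": ["relu", "prelu", "swish"],
    "batch_norm": [True, False],
}

def enumerate_designs(space):
    """Yield every GNN design as a dict, one per point in the
    Cartesian product of the design dimensions."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

designs = list(enumerate_designs(design_space))
print(len(designs))   # 4 * 3 * 3 * 2 = 72 candidate designs
print(designs[0])     # first design in the enumeration
```

Even this toy space shows why exhaustive evaluation quickly becomes infeasible and why an efficient design-space evaluation method is needed.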


New tasks

Neural Information Processing Systems

The SGD optimizer is set with a momentum of 0.9. This is consistent with the choice of GAT, where additive attention is used. Figure B.1: Ranking analysis for a new GNN design dimension, Attention. B.2 Case Study: Link Prediction as a New Type of Task. Our GNN design framework is applicable to graph learning tasks beyond node or graph classification. Here we include additional results for link prediction tasks. The extended task space is shown in Figure B.2. Figure B.2: GNN task space for a new type of GNN task, Link Prediction. In Table C.1, we show the best GNN design that we discover for each task in the main manuscript.
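The task space referenced above can be sketched as follows: rank a fixed set of anchor models on two tasks and correlate the rankings, so that a high rank correlation marks the tasks as similar. Kendall's tau is used here as a plausible similarity metric, and the anchor-model accuracies below are made-up numbers for illustration, not results from the paper.

```python
def kendall_tau(xs, ys):
    """Kendall rank correlation between two paired score lists,
    computed directly from concordant/discordant pairs."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical accuracies of five anchor GNN designs on three tasks.
task_a = [0.71, 0.65, 0.80, 0.58, 0.74]  # e.g. a node-classification task
task_b = [0.69, 0.62, 0.78, 0.55, 0.73]  # similar task: same design ranking
task_c = [0.50, 0.81, 0.44, 0.77, 0.49]  # dissimilar task: ranking disagrees

print(kendall_tau(task_a, task_b))  # 1.0: best designs transfer
print(kendall_tau(task_a, task_c))  # negative: rankings disagree
```

Under this view, transferring the best design to a new task amounts to finding the most similar known task and reusing its top-ranked design.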



approach to study GNN designs, the first quantitative analysis for GNN task similarity, and offers rigorous findings.

Neural Information Processing Systems

We thank the reviewers for their constructive feedback. We thank R2 and R3 for pointing out that our paper lacks theoretical analysis. PReLU activation significantly improves GNN performance. We will add these new discussions to the revised paper. We thank the reviewers for suggesting other design dimensions to explore.






Design Space for Graph Neural Networks

You, Jiaxuan, Ying, Rex, Leskovec, Jure

arXiv.org Artificial Intelligence
