A Limitations

Our results and analysis on the graph tokenizer and graph decoder are confined to the task of MGM.

Neural Information Processing Systems 

Firstly, SGTs (i.e., simple GNNs) are still powerful and can "distinguish almost all non-isomorphic graphs" [ ]. The comparison with VQ-VAE (Table 3b) emphasizes the impact of pretraining methods on the tokenizer's performance. We leave the investigation of how to effectively pretrain GNN-based tokenizers as future work. We have included the literature review of MGM in the main body of the paper. However, a closer inspection reveals several critical distinctions between MGM and these methods. Finally, MGM employs remask decoding to constrain the encoder's ability on

This code uses a single-layer SGT of GIN as an example.
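As a rough illustration of such a tokenizer, the following is a minimal sketch of a single-layer GIN used as an SGT. The graph, feature dimensions, MLP width, and the nearest-codebook discretization step are all illustrative assumptions for this sketch, not the paper's actual configuration or code.

```python
import numpy as np

def gin_layer(X, A, W1, b1, W2, b2, eps=0.0):
    """One GIN layer: h_v = MLP((1 + eps) * x_v + sum_{u in N(v)} x_u).

    X: (n, d) node features; A: (n, n) adjacency matrix without self-loops.
    W1/b1 and W2/b2 parameterize the standard two-layer MLP of GIN.
    """
    agg = (1.0 + eps) * X + A @ X          # self term plus neighbor sum
    hidden = np.maximum(agg @ W1 + b1, 0)  # ReLU
    return hidden @ W2 + b2

def tokenize(H, codebook):
    """Map each node embedding to the id of its nearest codebook entry.

    Shown only to make the 'token' aspect concrete; a hypothetical codebook.
    """
    d2 = ((H[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Toy 4-node path graph with random 3-dim features (illustrative only).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)

H = gin_layer(X, A, W1, b1, W2, b2)      # (4, 8) node embeddings
codebook = rng.normal(size=(16, 8))      # hypothetical 16-entry codebook
tokens = tokenize(H, codebook)           # one discrete token id per node
print(tokens.shape)
```

In an MGM pipeline, these per-node token ids would serve as reconstruction targets for the masked nodes; the single GIN layer is the whole tokenizer, which is what makes the SGT "simple".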
