
A Limitations: Our results and analysis on the graph tokenizer and graph decoder are confined to the task of MGM (masked graph modeling).

Neural Information Processing Systems

Firstly, SGTs (i.e., simple GNNs) are still powerful and can "distinguish almost all non-isomorphic graphs". VQ-VAE (Table 3b) emphasizes the impact of pretraining methods on the tokenizer's performance; we leave the investigation of how to effectively pretrain GNN-based tokenizers as future work. We have included the literature review of MGM in the main body of the paper. However, a closer inspection reveals several critical distinctions between MGM and these methods. Finally, MGM employs remask decoding to constrain the encoder's ability. This code uses a single-layer SGT of GIN as an example.
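Below is a minimal sketch of such a single-layer GIN tokenizer in PyTorch. The class name, dimensions, and dense-adjacency formulation are illustrative assumptions rather than the paper's released code; the point is only that one GIN aggregation step already yields node-level token embeddings that can serve as reconstruction targets in MGM.

import torch
import torch.nn as nn

class SingleLayerGINTokenizer(nn.Module):
    """Sketch of a single-layer GIN used as a simple graph tokenizer (SGT).

    One round of sum aggregation followed by an MLP produces per-node token
    embeddings that can serve as reconstruction targets in masked graph modeling.
    """

    def __init__(self, in_dim: int, out_dim: int, eps: float = 0.0):
        super().__init__()
        self.eps = nn.Parameter(torch.tensor(eps))  # learnable GIN epsilon
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   [num_nodes, in_dim]      node features
        # adj: [num_nodes, num_nodes]   dense adjacency without self-loops
        neighbor_sum = adj @ x                     # sum over each node's neighbors
        h = (1.0 + self.eps) * x + neighbor_sum    # GIN update rule
        return self.mlp(h)                         # per-node token embeddings

# Toy usage: tokenize a 3-node path graph with 8-dimensional node features.
x = torch.randn(3, 8)
adj = torch.tensor([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])
tokens = SingleLayerGINTokenizer(in_dim=8, out_dim=16)(x, adj)
print(tokens.shape)  # torch.Size([3, 16])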



PRODIGY: Enabling In-context Learning Over Graphs

Neural Information Processing Systems

While large language models have demonstrated in-context learning, how it could be performed over graphs remains unexplored. In this paper, we develop Pretraining Over Diverse In-Context Graph Systems (PRODIGY), the first pretraining framework that enables in-context learning over graphs.






Neural Data Transformer 2: Multi-context Pretraining for Neural Spiking Activity Joel Ye

Neural Information Processing Systems

In this work we focus on one primary use case: neuroprosthetics powered by intracortical brain-computer interfaces (iBCIs). With electrical recordings of just dozens to hundreds of channels of neuronal population spiking activity, today's iBCIs can relate this observed neural activity to behavioral intent, achieving impressive milestones such as high-speed speech decoding.
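As a rough illustration of the kind of mapping an iBCI learns (not NDT2 itself, and with channel counts, data, and the ridge penalty invented for this sketch), a linear ridge-regression decoder can relate binned multi-channel spike counts to a behavioral variable such as 2D cursor velocity:

import numpy as np

# Illustrative only: a ridge decoder from binned spike counts to intended
# 2D cursor velocity. All shapes, data, and hyperparameters are assumptions.
rng = np.random.default_rng(0)
T, C = 1000, 96                                        # time bins, recording channels
spikes = rng.poisson(2.0, size=(T, C)).astype(float)   # neural activity (spike counts)
velocity = rng.normal(size=(T, 2))                     # behavioral intent (x, y velocity)

X = np.hstack([spikes, np.ones((T, 1))])               # append a bias column
lam = 1.0                                              # ridge regularization strength
W = np.linalg.solve(X.T @ X + lam * np.eye(C + 1), X.T @ velocity)

decoded = X @ W                                        # decoded velocity per time bin
print(decoded.shape)                                   # (1000, 2)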