LLMs as Zero-shot Graph Learners: Alignment of GNN Representations with LLM Token Embeddings
Neural Information Processing Systems
Zero-shot graph machine learning, especially with graph neural networks (GNNs), has garnered significant interest due to the challenge of scarce labeled data. While methods like self-supervised learning and graph prompt learning have been extensively explored, they often rely on fine-tuning with task-specific labels, limiting their effectiveness in zero-shot scenarios. Inspired by the zero-shot capabilities of instruction-fine-tuned large language models (LLMs), we introduce a novel framework named Token Embedding-Aligned Graph Language Model (TEA-GLM) that leverages LLMs as cross-dataset and cross-task zero-shot learners for graph machine learning.
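The abstract (and the title's "alignment of GNN representations with LLM token embeddings") suggests projecting graph representations into the LLM's token-embedding space. The sketch below is only an illustrative reading of that idea, not the authors' method: a hypothetical linear projector maps frozen GNN outputs to the LLM embedding dimension and scores them against token embeddings by cosine similarity; all names and dimensions are assumptions.

```python
# Hypothetical sketch: align GNN node representations with LLM token embeddings
# via a learnable linear projection. Dimensions, names, and the similarity-based
# alignment objective are illustrative assumptions, not the TEA-GLM specification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNNToTokenProjector(nn.Module):
    """Maps GNN embeddings (d_gnn) into the LLM token-embedding space (d_llm)."""
    def __init__(self, d_gnn: int, d_llm: int):
        super().__init__()
        self.proj = nn.Linear(d_gnn, d_llm)

    def forward(self, gnn_repr: torch.Tensor) -> torch.Tensor:
        return self.proj(gnn_repr)

# Toy usage with random tensors standing in for real embeddings.
d_gnn, d_llm, num_nodes, vocab = 128, 4096, 10, 32000
projector = GNNToTokenProjector(d_gnn, d_llm)

gnn_repr = torch.randn(num_nodes, d_gnn)   # stand-in for frozen GNN node outputs
token_emb = torch.randn(vocab, d_llm)      # stand-in for frozen LLM token embeddings

projected = F.normalize(projector(gnn_repr), dim=-1)
tokens = F.normalize(token_emb, dim=-1)

# Cosine similarity between each projected node and every token embedding;
# an alignment objective would pull nodes toward semantically related tokens.
similarity = projected @ tokens.T          # shape: (num_nodes, vocab)
print(similarity.shape)
```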
- Genre:
  - Research Report > Experimental Study (1.00)
- Industry:
  - Banking & Finance (0.48)
  - Information Technology (0.47)
- Technology: