GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks
Mengmei Zhang, Mingwei Sun, Peng Wang, Shen Fan, Yanhu Mo, Xiaoxiao Xu, Hong Liu, Cheng Yang, Chuan Shi
arXiv.org Artificial Intelligence
Large language models (LLMs) such as ChatGPT, which exhibit powerful zero-shot and instruction-following capabilities, have catalyzed a revolutionary transformation across diverse fields of artificial intelligence, especially for open-ended tasks. This idea remains less explored in the graph domain: despite the availability of numerous powerful graph models (GMs), they are restricted to tasks in a pre-defined form. Although several methods applying LLMs to graphs have been proposed, they use the LLM either as a node feature enhancer or as a standalone predictor and thus fail to handle pre-defined and open-ended tasks simultaneously. To break this dilemma, we propose to bridge the pretrained GM and the LLM with a Translator, named GraphTranslator, which leverages the GM to handle pre-defined tasks effectively and exploits the extended interface of the LLM to offer various open-ended tasks for the GM. To train such a Translator, we propose a Producer capable of constructing graph-text alignment data along node information, neighbor information and model information. By treating the node representation as a type of language, the proposed GraphTranslator empowers an LLM to make predictions based on node representations and language instructions, providing a unified perspective on both pre-defined and open-ended tasks. Extensive results show that GraphTranslator effectively improves zero-shot node classification. Graph question answering experiments further reveal GraphTranslator's potential across a broad spectrum of open-ended applications driven by language instructions.
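The core idea described above (projecting a graph model's node embedding into the LLM's token-embedding space so it can be read alongside a language instruction) can be sketched minimally as follows. All names, dimensions, and the single-linear-map design are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch: a "Translator" maps a frozen graph model's node
# embedding into a few vectors in the LLM token-embedding space
# ("soft tokens"), which an LLM would consume before instruction tokens.
# Dimensions, token count, and random weights are placeholders.
import random

GM_DIM = 8           # node-embedding size from the graph model (assumed)
LLM_DIM = 16         # LLM token-embedding size (assumed)
NUM_SOFT_TOKENS = 4  # soft tokens produced per node (assumed)

random.seed(0)

# Stand-in for a frozen graph model's output for one node.
node_embedding = [random.uniform(-1, 1) for _ in range(GM_DIM)]

# Translator: one linear map per soft token (random placeholder weights).
weights = [
    [[random.uniform(-0.1, 0.1) for _ in range(GM_DIM)] for _ in range(LLM_DIM)]
    for _ in range(NUM_SOFT_TOKENS)
]

def translate(node_emb):
    """Project a GM node embedding into NUM_SOFT_TOKENS LLM-space vectors."""
    tokens = []
    for w in weights:
        # Each soft token is a matrix-vector product: (LLM_DIM x GM_DIM) @ node_emb.
        tokens.append([sum(wi * xi for wi, xi in zip(row, node_emb)) for row in w])
    return tokens

soft_tokens = translate(node_embedding)
# The LLM would then see [soft tokens] + [instruction tokens], e.g. the
# embedded text of "Which category does this node belong to?", and decode
# an answer, covering both pre-defined and open-ended tasks.
print(len(soft_tokens), len(soft_tokens[0]))  # → 4 16
```

In the paper's actual setup the Translator is trained on the Producer's graph-text alignment data; this sketch omits training and only shows the representation-to-token-space mapping that makes node embeddings consumable by an LLM.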
Feb-11-2024