Efficient Large Language Models Fine-Tuning On Graphs

Rui Xue, Xipeng Shen, Ruozhou Yu, Xiaorui Liu

arXiv.org Artificial Intelligence 

Learning from Text-Attributed Graphs (TAGs) has attracted significant attention due to its wide range of real-world applications. The rapid evolution of large language models (LLMs) has revolutionized the way we process textual data, indicating strong potential to replace the shallow text embeddings commonly used in Graph Neural Networks (GNNs). However, we find that existing LLM approaches that exploit text information in graphs suffer from inferior computation and data efficiency. In this work, we introduce a novel and efficient approach for the end-to-end fine-tuning of large language models on TAGs, named LEADING. The proposed approach maintains computation cost and memory overhead comparable to the graph-less fine-tuning of LLMs. Moreover, it effectively transfers the rich knowledge in LLMs to downstream graph learning tasks with limited labeled data in semi-supervised learning. Its superior computation and data efficiency are demonstrated through comprehensive experiments, offering a promising solution for a wide range of LLMs and graph learning tasks on TAGs.

Graph neural networks (GNNs) have been widely used for representation learning on graph-structured data (Hamilton, 2020; Ma & Tang, 2021), and they achieve promising state-of-the-art performance on various graph learning tasks, such as node classification, link prediction, and graph classification.
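As a conceptual illustration of the setting described above, the sketch below shows what end-to-end fine-tuning of an LLM together with a GNN on a TAG might look like: the LLM encodes each node's text into an embedding, the GNN propagates those embeddings over the graph, and both modules receive gradients from the node-classification loss. This is not the LEADING algorithm itself; the model name, dimensions, toy data, and training loop are illustrative assumptions only.

```python
# Minimal conceptual sketch (assumed setup, not the LEADING method): an LLM
# produces node features from text, a two-layer GCN classifies nodes, and the
# whole pipeline is trained end-to-end on the labeled nodes of a TAG.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
from torch_geometric.nn import GCNConv


class LLMGNN(torch.nn.Module):
    def __init__(self, lm_name="bert-base-uncased", hidden=256, num_classes=2):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)            # text encoder, fine-tuned jointly
        self.conv1 = GCNConv(self.lm.config.hidden_size, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, input_ids, attention_mask, edge_index):
        # Node features = [CLS] embedding of each node's text attribute.
        h = self.lm(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        h = F.relu(self.conv1(h, edge_index))
        return self.conv2(h, edge_index)


# Toy TAG: two nodes with text attributes, one undirected edge, one labeled node.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["Paper about graph neural networks.", "Paper about language models."]
edge_index = torch.tensor([[0, 1], [1, 0]])                     # shape [2, num_edges]
y = torch.tensor([0, 1])
train_mask = torch.tensor([True, False])                        # semi-supervised: few labels

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
model = LLMGNN()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

logits = model(enc["input_ids"], enc["attention_mask"], edge_index)
loss = F.cross_entropy(logits[train_mask], y[train_mask])
loss.backward()                                                  # gradients flow into both GNN and LLM
optimizer.step()
```

Note that a naive pipeline like this must run the LLM over every node touched by message passing in each step, which is exactly the computation and memory bottleneck the abstract refers to and that LEADING is designed to avoid.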