Revisiting the Graph Reasoning Ability of Large Language Models: Case Studies in Translation, Connectivity and Shortest Path

Xinnan Dai, Qihao Wen, Yifei Shen, Hongzhi Wen, Dongsheng Li, Jiliang Tang, Caihua Shan

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) have achieved great success in various reasoning tasks. In this work, we focus on the graph reasoning ability of LLMs. Although theoretical studies have proved that LLMs are capable of handling graph reasoning tasks, empirical evaluations reveal numerous failures. To deepen our understanding of this discrepancy, we revisit the ability of LLMs on three fundamental graph tasks: graph description translation, graph connectivity, and the shortest-path problem. Our findings suggest that LLMs can fail to understand graph structures through text descriptions and exhibit varying performance on all three of these fundamental tasks. Meanwhile, we perform a real-world investigation on knowledge graphs and make observations consistent with our findings.

[Figure 1: The overview of datasets in accuracy and distribution across different connectivity types. We evaluate GPT-3 on determining whether a path exists between]
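For context on the latter two tasks, a minimal sketch of how ground truth is typically computed for unweighted graphs (an assumption here, not the paper's stated evaluation code): a single breadth-first search answers both connectivity (is there any path?) and the shortest-path length in hops.

```python
from collections import deque

def shortest_path_length(edges, src, dst):
    """BFS over an undirected edge list.

    Returns the hop count of a shortest path from src to dst,
    or None if the two nodes are not connected.
    """
    # Build an adjacency list from the edge list.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)

    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist  # first time we reach dst is the shortest
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # disconnected: the connectivity answer is "no path"

edges = [(0, 1), (1, 2), (3, 4)]
print(shortest_path_length(edges, 0, 2))  # -> 2
print(shortest_path_length(edges, 0, 4))  # -> None
```

Connectivity is then just `shortest_path_length(...) is not None`; the LLM, by contrast, must infer both answers from a textual edge-list description of the same graph.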
