Sequential Ordering in Textual Descriptions: Impact on Spatial Perception Abilities of Large Language Models
Ge, Yuyao, Liu, Shenghua, Mei, Lingrui, Chen, Lizhe, Cheng, Xueqi
arXiv.org Artificial Intelligence
In recent years, Large Language Models (LLMs) have reached state-of-the-art performance across multiple domains. However, progress in graph reasoning remains limited. Our work addresses this gap by thoroughly investigating graph reasoning with LLMs. We reveal the impact of text sequence on LLM spatial understanding, finding that the order of graph-descriptive text significantly affects LLM reasoning performance on graphs. By reordering the graph-descriptive text sequences, we improve LLM performance from 42.22% to 70%. Furthermore, we evaluate the relationship between LLM performance and graph size, discovering that reasoning performance does not decrease monotonically as graph size increases. Finally, we introduce the Scaled Graph Reasoning benchmark for assessing LLM performance across varied graph sizes.
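To make the idea of "altering graph-descriptive text sequences" concrete, the sketch below contrasts two orderings of the same edge list: a random shuffle versus a traversal-coherent (BFS) ordering in which each edge appears near the edges it connects to. This is an illustrative assumption about what a reordering might look like, not the authors' exact method; the helper names (`describe_edges`, `bfs_ordered_edges`) are hypothetical.

```python
import random
from collections import deque

def describe_edges(edges):
    """Render an edge list as a graph-descriptive text sequence."""
    return " ".join(f"Node {u} is connected to node {v}." for u, v in edges)

def bfs_ordered_edges(edges, start):
    """Reorder an undirected edge list to follow a BFS traversal from `start`,
    so that consecutive sentences describe nearby parts of the graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, order, emitted = {start}, [], set()
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            key = frozenset((u, v))
            if key not in emitted:      # emit each undirected edge once
                emitted.add(key)
                order.append((u, v))
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# A small example graph (assumed for illustration).
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 4)]
random.seed(0)
shuffled = random.sample(edges, k=len(edges))
ordered = bfs_ordered_edges(edges, start=0)

print("Shuffled:   ", describe_edges(shuffled))
print("BFS-ordered:", describe_edges(ordered))
```

Both prompts encode the same graph; only the sentence order differs. The paper's finding is that such ordering differences alone can move an LLM's reasoning accuracy substantially.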
Feb-11-2024