GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning
Neural Information Processing Systems
Large Language Models (LLMs) are increasingly used for tasks involving graph structures. Although LLMs can process graph information in textual form, they overlook the rich visual modality, which is an intuitive way for humans to comprehend structural information and conduct general graph reasoning. The potential benefits of representing graph structures as visual images (i.e., visual graphs) remain unexplored. To fill this gap, we propose an end-to-end framework called Graph to vIsual and Textual IntegrAtion (GITA), which incorporates visual graphs into general graph reasoning. In addition, we construct the Graph-based Vision-Language Question Answering (GVLQA) dataset from existing graph data, the first vision-language dataset for general graph reasoning. Extensive experiments on GVLQA and five real-world datasets show that GITA outperforms mainstream LLMs on general graph reasoning. Moreover, the results demonstrate the effectiveness of layout augmentation on visual graphs and of pretraining on GVLQA.
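The abstract describes pairing a textual description of a graph with a rendered image of the same graph. The paper's actual rendering pipeline and prompt format are not given here, so the sketch below is purely illustrative: hypothetical helpers that turn an edge list into (a) a natural-language description and (b) a minimal SVG "visual graph" with a circular layout, using only the standard library.

```python
import math

def graph_to_text(edges, n):
    """Hypothetical textual graph description (not GITA's actual prompt format)."""
    lines = [f"In an undirected graph, the nodes are numbered 0 to {n - 1}."]
    lines += [f"Node {u} is connected to node {v}." for u, v in edges]
    return "\n".join(lines)

def graph_to_svg(edges, n, size=200):
    """Render a 'visual graph' as an SVG string, placing nodes on a circle.

    A real system might instead use a graph-drawing library with varied
    layouts (the paper mentions layout augmentation); this circular layout
    is just the simplest self-contained stand-in.
    """
    r = size * 0.4
    pos = {
        i: (size / 2 + r * math.cos(2 * math.pi * i / n),
            size / 2 + r * math.sin(2 * math.pi * i / n))
        for i in range(n)
    }
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for u, v in edges:  # draw edges first so nodes sit on top
        (x1, y1), (x2, y2) = pos[u], pos[v]
        parts.append(
            f'<line x1="{x1:.1f}" y1="{y1:.1f}" x2="{x2:.1f}" y2="{y2:.1f}" stroke="black"/>'
        )
    for i, (x, y) in pos.items():  # nodes as labeled circles
        parts.append(f'<circle cx="{x:.1f}" cy="{y:.1f}" r="8" fill="lightblue" stroke="black"/>')
    parts.append("</svg>")
    return "\n".join(parts)

# A 4-cycle: both modalities describe the same structure.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
text_view = graph_to_text(edges, 4)
visual_view = graph_to_svg(edges, 4)
```

In a GITA-style setup, `text_view` would go into the language prompt and `visual_view` (rasterized) into the vision encoder, so the model sees the same graph in both modalities.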
May-28-2025, 06:04:05 GMT