LLM-based Discriminative Reasoning for Knowledge Graph Question Answering

Mufan Xu, Kehai Chen, Xuefeng Bai, Muyun Yang, Tiejun Zhao, Min Zhang

arXiv.org Artificial Intelligence 

Large language models (LLMs) based on the generative pre-trained Transformer have achieved remarkable performance on knowledge graph question answering (KGQA) tasks. However, LLMs often produce ungrounded subgraph planning or reasoning results in KGQA because of the hallucination inherent to the generative paradigm, which may hinder the advancement of LLM-based KGQA models. To address this issue, we propose a novel LLM-based Discriminative Reasoning (LDR) method that explicitly models the subgraph retrieval and answer inference processes. By adopting discriminative strategies, LDR not only enhances the capability of LLMs to retrieve question-related subgraphs but also alleviates the ungrounded reasoning brought about by the generative paradigm. Experimental results show that the proposed approach outperforms multiple strong comparison methods and achieves state-of-the-art performance on two widely used benchmarks.

Figure 1: Example of previous generation-based methods and the discriminative method proposed in this paper.
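To make the generative-vs-discriminative distinction concrete, the sketch below illustrates the general idea in a hedged, simplified form: instead of free-form decoding, both subgraph retrieval and answer inference are cast as scoring and selecting from explicit candidates drawn from the knowledge graph, so no invented edges or entities can appear. The `score` function here is a toy token-overlap stand-in for an LLM discriminator head; the function names and data are illustrative assumptions, not the paper's implementation.

```python
def score(question, text):
    # Toy relevance score standing in for an LLM discriminator,
    # which would assign a grounding probability to each candidate.
    q = set(question.lower().split())
    return len(q & set(text.lower().replace("_", " ").split()))

def retrieve_subgraph(question, triples, k=2):
    # Discriminative retrieval: rank existing KG triples by relevance
    # and keep the top-k, so the subgraph contains only real edges.
    return sorted(triples, key=lambda t: score(question, " ".join(t)),
                  reverse=True)[:k]

def infer_answer(question, subgraph):
    # Discriminative inference: choose the answer entity from the
    # retrieved subgraph rather than decoding free text, which rules
    # out answers ungrounded in the graph.
    q = set(question.lower().split())
    best, best_s = None, -1
    for h, r, t in subgraph:
        for ent in (h, t):
            if ent.lower() in q:
                continue  # skip entities already mentioned in the question
            s = score(question, f"{h} {r} {t}")
            if s > best_s:
                best, best_s = ent, s
    return best

triples = [
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
    ("france", "located_in", "europe"),
]
sg = retrieve_subgraph("what is the capital of france", triples)
answer = infer_answer("what is the capital of france", sg)
# answer == "paris"
```

In an actual LDR-style system the overlap heuristic would be replaced by LLM-derived candidate scores, but the selection-over-generation structure is the point: every intermediate result is constrained to exist in the graph.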