Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources
Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq Joty, Soujanya Poria, Lidong Bing
arXiv.org Artificial Intelligence
We present chain-of-knowledge (CoK), a framework that augments large language models (LLMs) by dynamically incorporating grounding information from heterogeneous sources. It results in more factual rationales and reduced hallucination in generation. Specifically, CoK consists of three stages: reasoning preparation, dynamic knowledge adapting, and answer consolidation. Given a knowledge-intensive question, CoK first prepares several preliminary rationales and answers while identifying the relevant knowledge domains. If the sampled answers reach no majority consensus, CoK corrects the rationales step by step by adapting knowledge from the identified domains. These corrected rationales can plausibly serve as a better foundation for the final answer consolidation. Unlike prior studies that primarily use unstructured data, CoK also leverages structured knowledge sources such as Wikidata and tables, which provide more reliable factual information. To access both unstructured and structured sources during dynamic knowledge adapting, we propose an adaptive query generator that produces queries in multiple query languages, including SPARQL, SQL, and natural sentences. Moreover, to minimize error propagation between rationales, CoK corrects rationales progressively, using the preceding corrected rationales to generate and correct subsequent ones. Extensive experiments show that CoK consistently improves the performance of LLMs on knowledge-intensive tasks across different domains.

In recent years, large language models (LLMs) such as ChatGPT (OpenAI, 2023) have demonstrated impressive language generation capabilities (Cheng et al., 2023; Ding et al., 2023). However, a major challenge for LLMs is hallucination: their tendency to confidently generate plausible but factually incorrect text (Ji et al., 2023). As shown in Figure 1, when asked "What year was the Argentine actor who directed El Tio Disparate born?", a question that requires factual knowledge to answer, even the most advanced LLMs often give an incorrect answer. While LLMs can recall a remarkable amount of information from their training data, effectively updating or controlling the factual knowledge within these models remains challenging (Luo et al., 2023). A promising direction for addressing hallucination in generation is to augment LLMs with external knowledge (Mialon et al., 2023). These methods pair an LLM with a retrieval system that uses external factual knowledge to guide the generation process; instead of relying solely on knowledge internalized during training, they can fetch relevant information from external sources.

* Equal contribution. Xingxuan Li, Yew Ken Chia, and Bosheng Ding are under the Joint Ph.D. Program between Alibaba and their corresponding universities. We will make our code and data publicly available.
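To make the three-stage pipeline concrete, here is a minimal Python sketch of the control flow described in the abstract: sampling rationales with a majority-vote consensus check, adaptive query generation per knowledge source, and progressive step-by-step correction. The llm() helper, the retrieve callback, and all function names are hypothetical illustrations under stated assumptions, not the authors' released code.

```python
# Minimal sketch of the CoK pipeline described above. llm() and retrieve()
# are hypothetical placeholders for an LLM client and knowledge retrievers.
from collections import Counter

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model (hypothetical)."""
    raise NotImplementedError

# Stage 1: reasoning preparation -- sample several chain-of-thought rationales
# and answers, and identify the knowledge domains relevant to the question.
def prepare(question: str, n_samples: int = 5):
    samples = [llm(f"Q: {question}\nThink step by step, then answer.")
               for _ in range(n_samples)]
    rationales = [s.rsplit("Answer:", 1)[0] for s in samples]
    answers = [s.rsplit("Answer:", 1)[-1].strip() for s in samples]
    domains = llm(f"Which knowledge domains are relevant to: {question}?")
    return rationales, answers, [d.strip() for d in domains.split(",")]

# Adaptive query generator: emit a query in the language each source expects
# (SPARQL for Wikidata, SQL for tables, otherwise a natural-language query).
def adaptive_query(rationale_step: str, source: str) -> str:
    lang = {"wikidata": "SPARQL", "table": "SQL"}.get(
        source, "a natural-language question")
    return llm(f"Write {lang} to verify this claim: {rationale_step}")

# Stage 2: dynamic knowledge adapting -- correct the rationale step by step,
# conditioning each correction on the already-corrected preceding steps to
# limit error propagation. Each domain is assumed to map to one source here.
def adapt(question, rationales, domains, retrieve):
    corrected = []
    for step in rationales[0].splitlines():
        context = "\n".join(corrected)
        evidence = [retrieve(adaptive_query(step, src), src) for src in domains]
        step = llm(f"Question: {question}\nSo far: {context}\n"
                   f"Evidence: {evidence}\nRevise this step if needed: {step}")
        corrected.append(step)
    return corrected

# Stage 3: answer consolidation -- answer from the corrected rationale chain.
def chain_of_knowledge(question, retrieve):
    rationales, answers, domains = prepare(question)
    answer, votes = Counter(answers).most_common(1)[0]
    if votes > len(answers) // 2:   # majority consensus: accept as-is
        return answer
    corrected = adapt(question, rationales, domains, retrieve)
    return llm(f"Question: {question}\nRationale:\n"
               + "\n".join(corrected) + "\nFinal answer:")
```

The key design choice mirrored here is that adapt() conditions each correction on the already-corrected prefix of the rationale, which is how CoK reduces error propagation between rationale steps.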
Dec-3-2023