Improving Causal Reasoning in Large Language Models: A Survey
Yu, Longxuan, Chen, Delin, Xiong, Siheng, Wu, Qingyang, Liu, Qingzhen, Li, Dawei, Chen, Zhikai, Liu, Xiaoze, Pan, Liangming
arXiv.org Artificial Intelligence
Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world. While large language models (LLMs) can generate rationales for their outputs, their ability to reliably perform causal reasoning remains uncertain, often falling short in tasks requiring a deep understanding of causality. In this survey, we provide a comprehensive review of research aimed at enhancing LLMs for causal reasoning. We categorize existing methods based on the role of LLMs: either as reasoning engines or as helpers providing knowledge or data to traditional CR methods, followed by a detailed discussion of the methodologies in each category. We then evaluate the performance of LLMs on various causal reasoning tasks, providing key findings and in-depth analysis. Finally, we provide insights from current studies and highlight promising directions for future research. We aim for this work to serve as a comprehensive resource, fostering further advancements in causal reasoning with LLMs. Resources are available at https://github.com/chendl02/Awesome-LLM-causal-reasoning.
Nov-6-2024