Unveiling Causal Reasoning in Large Language Models: Reality or Mirage?
–Neural Information Processing Systems
Causal reasoning capability is critical in advancing large language models (LLMs) toward strong artificial intelligence. While versatile LLMs appear to understand contextual causality and to provide responses that obey the laws of causality, it remains unclear whether they perform genuine, human-like causal reasoning. Current evidence suggests they do not: LLMs are only capable of shallow (level-1) causal reasoning, primarily attributable to the causal knowledge embedded in their parameters, and they lack the capacity for genuine human-like (level-2) causal reasoning. To support this hypothesis, we methodologically examine the autoregression mechanism of transformer-based LLMs, revealing that it is not inherently causal. Empirically, we introduce a new causal Q&A benchmark, CausalProbe-2024, whose corpora are fresh and nearly unseen by the studied LLMs. The LLMs exhibit a significant performance drop on CausalProbe-2024 compared with earlier benchmarks, indicating that they primarily engage in level-1 causal reasoning. To bridge the gap toward level-2 causal reasoning, we draw inspiration from the fact that human reasoning is usually facilitated by general knowledge and intended goals.
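The level-1 versus level-2 distinction can be made precise in terms of Pearl's ladder of causation. The sketch below is our own illustration rather than the paper's formalism: it contrasts the purely associational next-token factorization that autoregressive LLMs optimize with the interventional query that level-2 causal reasoning requires.

An autoregressive LLM is trained only on the observational factorization

p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),

so it natively answers associational (level-1) queries of the form P(Y \mid X = x). Level-2 causal reasoning instead requires interventional queries P(Y \mid \mathrm{do}(X = x)), which in general differ from P(Y \mid X = x) whenever confounding is present; nothing in the autoregressive training objective forces the two to coincide.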