CLadder: Assessing Causal Reasoning in Language Models
Neural Information Processing Systems
The ability to perform causal reasoning is widely considered a core feature of intelligence. In this work, we investigate whether large language models (LLMs) can coherently reason about causality. Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules. To address this, we propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al.
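To make the target behavior concrete, here is a minimal sketch of the kind of rule-based causal inference such a benchmark probes. It is not from the paper; the graph and all probabilities are hypothetical illustration values. On a toy confounded graph (Z → X, Z → Y, X → Y), it computes the interventional query P(Y=1 | do(X=1)) by backdoor adjustment and contrasts it with the observational conditional P(Y=1 | X=1):

```python
# Toy confounded graph: Z -> X, Z -> Y, X -> Y (all variables binary).
# All probabilities are hypothetical illustration values.

# P(Z), P(X=1 | Z), P(Y=1 | X, Z)
p_z = {0: 0.6, 1: 0.4}
p_x1_given_z = {0: 0.2, 1: 0.8}
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}

# Interventional query via backdoor adjustment:
# P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
p_y1_do_x1 = sum(p_y1_given_xz[(1, z)] * p_z[z] for z in p_z)

# Observational query for contrast:
# P(Y=1 | X=1) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z | X=1),
# where P(Z=z | X=1) follows from Bayes' rule.
p_x1 = sum(p_x1_given_z[z] * p_z[z] for z in p_z)
p_z_given_x1 = {z: p_x1_given_z[z] * p_z[z] / p_x1 for z in p_z}
p_y1_given_x1 = sum(p_y1_given_xz[(1, z)] * p_z_given_x1[z] for z in p_z)

print(f"P(Y=1 | do(X=1)) = {p_y1_do_x1:.3f}")    # ~0.600, adjusts for the confounder Z
print(f"P(Y=1 | X=1)     = {p_y1_given_x1:.3f}")  # ~0.764, confounded observational estimate
```

The two quantities differ because Z confounds X and Y; telling the interventional question apart from the associational one, and applying the right formal rule to answer it, is precisely the kind of reasoning that commonsense-only evaluations do not test.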
Mar-26-2025, 05:10:21 GMT
- Country:
  - Asia (0.93)
  - Europe (1.00)
  - North America
    - Canada (0.67)
    - United States > Minnesota (0.28)
- Industry:
  - Education (0.67)
  - Health & Medicine > Therapeutic Area
    - Immunology (0.93)
    - Infections and Infectious Diseases (0.92)