Follow My Lead: Logical Fallacy Classification with Knowledge-Augmented LLMs
Olivia Peiyu Wang, Tashvi Bansal, Ryan Bai, Emily M. Chui, Leilani H. Gilpin
arXiv.org Artificial Intelligence
Large Language Models (LLMs) suffer from critical reasoning gaps, including a tendency to hallucinate and poor accuracy in classifying logical fallacies. This limitation stems from their default System 1 processing, which is fast and intuitive, whereas reliable reasoning requires the deliberate, effortful System 2 approach (Kahneman, 2011; Li et al., 2025). Since full System 2 training is often prohibitively expensive, we explore a low-cost, instruction-based intervention to bridge this gap. Our methodology introduces a novel stepwise instruction dataset that decomposes fallacy classification into a series of atomic procedural steps (simple binary questions). We further augment this with a final verification step in which models consult a relational knowledge graph of related fallacies. This procedural, rule-based intervention yields a significant improvement in LLM logical fallacy classification. Crucially, the approach also provides enhanced transparency into the LLMs' decision-making, highlighting a practical pathway for neuro-symbolic architectures to address LLM reasoning deficits.
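The pipeline the abstract describes (atomic binary steps per fallacy, then a knowledge-graph verification pass) can be sketched as follows. This is a minimal illustration only: the paper's actual instruction dataset and knowledge graph are not reproduced here, so the step questions, fallacy labels, and graph edges below are invented placeholders, and the `answer_step` callable stands in for an LLM answering one binary question at a time.

```python
# Illustrative sketch; all fallacy names, questions, and edges are placeholders.

# Each fallacy decomposed into atomic yes/no procedural steps.
STEPWISE_CHECKS = {
    "ad hominem": [
        "Does the argument respond to a claim made by another person?",
        "Does the response attack the person rather than the claim?",
    ],
    "slippery slope": [
        "Does the argument predict a chain of future events?",
        "Is the chain asserted without support for each link?",
    ],
}

# Relational knowledge graph of related fallacies, consulted as a final
# verification step before committing to a label.
RELATED_FALLACIES = {
    "ad hominem": ["tu quoque", "genetic fallacy"],
    "slippery slope": ["false cause"],
}

def classify(answer_step):
    """Run the stepwise procedure.

    `answer_step(fallacy, question) -> bool` plays the role of an LLM
    answering one atomic binary question.
    """
    for fallacy, questions in STEPWISE_CHECKS.items():
        if all(answer_step(fallacy, q) for q in questions):
            # Verification step: surface related fallacies the model
            # would re-check against before finalizing the label.
            return fallacy, RELATED_FALLACIES.get(fallacy, [])
    return None, []

# Toy oracle that answers "yes" only to the ad hominem steps.
label, to_verify = classify(lambda f, q: f == "ad hominem")
```

Because every step is an explicit binary question and the verification set is enumerated, each intermediate decision is inspectable, which is the transparency benefit the abstract highlights.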
Oct-14-2025