PCRLLM: Proof-Carrying Reasoning with Large Language Models under Stepwise Logical Constraints
Li, Tangrui; Wang, Pei; Hahm, Christian; Wang, Hongzheng; Spatola, Matteo; Shi, Justin
arXiv.org Artificial Intelligence
Large Language Models (LLMs) often exhibit limited logical coherence, mapping premises to conclusions without adherence to explicit inference rules. We propose Proof-Carrying Reasoning with LLMs (PCRLLM), a framework that constrains reasoning to single-step inferences while preserving natural language formulations. Each output explicitly specifies premises, rules, and conclusions, thereby enabling verification against a target logic. This mechanism mitigates trustworthiness concerns by supporting chain-level validation even in black-box settings. Moreover, PCRLLM facilitates systematic multi-LLM collaboration, allowing intermediate steps to be compared and integrated under formal rules. Finally, we introduce a benchmark schema for generating large-scale step-level reasoning data, combining natural language expressiveness with formal rigor.
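The abstract describes steps whose premises, rule, and conclusion are made explicit so a checker can validate each single-step inference, and hence a whole chain, against a target logic. A minimal sketch of that idea, assuming a hypothetical step format and a checker limited to modus ponens (the names `Step`, `check_modus_ponens`, and `check_chain` are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Step:
    """A proof-carrying reasoning step: premises, rule name, and conclusion
    are all stated explicitly so the step can be verified externally."""
    premises: list[str]
    rule: str
    conclusion: str

def check_modus_ponens(step: Step) -> bool:
    """Verify a step claiming modus ponens: from P and 'if P then Q', conclude Q."""
    if step.rule != "modus_ponens" or len(step.premises) != 2:
        return False
    # Accept the step if one premise is the conditional formed from the
    # other premise and the stated conclusion.
    return any(q == f"if {p} then {step.conclusion}"
               for p in step.premises for q in step.premises)

def check_chain(steps: list[Step], facts: set[str]) -> bool:
    """Chain-level validation: every step must be rule-correct and draw its
    premises only from the initial facts or earlier conclusions."""
    known = set(facts)
    for s in steps:
        if not all(p in known for p in s.premises) or not check_modus_ponens(s):
            return False
        known.add(s.conclusion)
    return True

step = Step(premises=["it rains", "if it rains then the ground is wet"],
            rule="modus_ponens",
            conclusion="the ground is wet")
print(check_modus_ponens(step))  # True
print(check_chain([step], {"it rains", "if it rains then the ground is wet"}))  # True
```

Because verification needs only the step's declared premises, rule, and conclusion, a chain can be checked without access to the model that produced it, which is what makes black-box, chain-level validation possible.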
Nov-12-2025