Mapping Faithful Reasoning in Language Models
Li, Jiazheng, Damianou, Andreas, Rosser, J, García, José Luis Redondo, Palla, Konstantina
arXiv.org Artificial Intelligence
Chain-of-thought (CoT) traces promise transparency for reasoning language models, but prior work shows they are not always faithful reflections of internal computation. This raises challenges for oversight: practitioners may misinterpret decorative reasoning as genuine. We introduce Concept Walk, a general framework for tracing how a model's internal stance evolves with respect to a concept direction during reasoning. Unlike surface text, Concept Walk operates in activation space, projecting each reasoning step onto a concept direction learned from contrastive data. This allows us to observe whether reasoning traces shape outcomes or are discarded. As a case study, we apply Concept Walk to the domain of safety using Qwen3-4B. We find that in 'easy' cases, perturbed CoTs are quickly ignored, indicating decorative reasoning, whereas in 'hard' cases, perturbations induce sustained shifts in internal activations, consistent with faithful reasoning. The contribution is methodological: Concept Walk provides a lens to re-examine faithfulness through concept-specific internal dynamics, helping identify when reasoning traces can be trusted and when they risk misleading practitioners.
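The core mechanics described in the abstract (a concept direction learned from contrastive data, with each reasoning step projected onto it) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the difference-of-means direction and the toy activation arrays are assumptions for demonstration.

```python
import numpy as np

def concept_direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between concept-positive and
    concept-negative activation sets, normalized to unit length.
    (One common way to learn a direction from contrastive data;
    the paper's exact estimator may differ.)"""
    d = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def concept_walk(step_acts: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Project each reasoning step's activation (one row per step) onto
    the concept direction, yielding a 1-D trajectory of the model's
    internal stance over the course of the CoT."""
    return step_acts @ direction

# Toy usage: synthetic activations in a 16-dim space standing in for
# hidden states extracted from the model at each reasoning step.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.1, size=(32, 16))   # concept present
neg = rng.normal(-1.0, 0.1, size=(32, 16))  # concept absent
d = concept_direction(pos, neg)

steps = rng.normal(0.0, 1.0, size=(8, 16))  # 8 reasoning steps
trajectory = concept_walk(steps, d)
print(trajectory.shape)  # one stance score per reasoning step
```

A sustained shift in this trajectory after a CoT perturbation would correspond to the 'hard' (faithful) regime described above, while a trajectory that quickly reverts would correspond to the 'easy' (decorative) regime.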
Oct-28-2025