Answer-Centric or Reasoning-Driven? Uncovering the Latent Memory Anchor in LLMs
Wu, Yang, Zhang, Yifan, Wang, Yiwei, Cai, Yujun, Wu, Yurong, Wang, Yuran, Xu, Ning, Cheng, Jian
While Large Language Models (LLMs) demonstrate impressive reasoning capabilities, growing evidence suggests that much of their success stems from memorized answer-reasoning patterns rather than genuine inference. In this work, we investigate a central question: are LLMs primarily anchored to final answers or to the textual pattern of reasoning chains? We propose a five-level answer-visibility prompt framework that systematically manipulates answer cues and probes model behavior through indirect, behavioral analysis. Experiments across state-of-the-art LLMs reveal a strong and consistent reliance on explicit answers: performance drops by 26.90% when answer cues are masked, even when complete reasoning chains are provided. These findings suggest that much of the reasoning exhibited by LLMs may reflect post-hoc rationalization rather than true inference, calling into question their inferential depth. Our study uncovers this answer-anchoring phenomenon with rigorous empirical validation and underscores the need for a more nuanced understanding of what constitutes reasoning in LLMs.
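The abstract's "five-level answer-visibility" manipulation can be illustrated with a minimal sketch. The level names, masking scheme, and example arithmetic below are assumptions for illustration only; the paper's actual prompt design is not specified here.

```python
# Hypothetical sketch of five levels of answer visibility: each level
# exposes progressively less of the answer cue and reasoning chain.
# (Level semantics are invented, not taken from the paper.)

REASONING = "Step 1: 15 * 4 = 60. Step 2: add 2 to the result."
ANSWER = "62"

def build_prompt(question: str, level: int) -> str:
    """Return a prompt with progressively reduced answer visibility."""
    if level == 0:   # full reasoning chain + explicit answer cue
        return f"{question}\n{REASONING}\nAnswer: {ANSWER}"
    if level == 1:   # full reasoning chain, answer cue masked
        return f"{question}\n{REASONING}\nAnswer: [MASKED]"
    if level == 2:   # reasoning chain only, no answer slot at all
        return f"{question}\n{REASONING}"
    if level == 3:   # truncated reasoning chain
        return f"{question}\n{REASONING.split('. ')[0]}."
    return question  # level 4: bare question

prompts = [build_prompt("What is 15 * 4 + 2?", lv) for lv in range(5)]
```

Comparing model accuracy across such levels is what would separate answer-anchored behavior (a sharp drop from level 0 to level 1) from reasoning-driven behavior (stable accuracy whenever the chain is present).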
Explainable Patterns for Distinction and Prediction of Moral Judgement on Reddit
Efstathiadis, Ion Stagkos, Paulino-Passos, Guilherme, Toni, Francesca
The r/AmITheAsshole forum on Reddit hosts discussion of moral issues based on concrete narratives presented by users. Existing analyses of the forum focus on its comments and do not make the underlying data publicly available. In this paper, we build a new dataset of comments and also investigate the classification of posts in the forum. Further, we identify textual patterns associated with the provocation of moral judgement by posts, with the expression of moral stance in comments, and with the decisions of trained classifiers of posts and comments.
A Mathematical Model for Linguistic Universals
We present a Markov model at the discourse level for Steven Pinker's "mentalese", or chains of mental states that transcend the spoken/written forms. Such (potentially) universal temporal structures of textual patterns lead us to a language-independent semantic representation, or a translationally-invariant word embedding, thereby forming the common ground for both comprehensibility within a given language and translatability between different languages. Applying our model to documents of moderate lengths, without relying on external knowledge bases, we reconcile Noam Chomsky's "poverty of stimulus" paradox with statistical learning of natural languages. We human beings distinguish ourselves from other animals (1-3), in that our brain development (4-6) enables us to convey sophisticated ideas and to share individual experiences, via languages (7-9). Texts written in natural languages constitute a major medium that perpetuates our civilizations (10), as a cumulative body of knowledge.
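A discourse-level Markov model of the kind the abstract describes can be sketched in a few lines. The state labels and transition counts below are invented for illustration; the paper's actual state space and estimation procedure are not given here.

```python
# Toy first-order Markov chain over coarse discourse states:
# estimate transition probabilities from bigram counts in one sequence.
from collections import Counter, defaultdict

# A document reduced to a sequence of hypothetical discourse states.
sequence = ["claim", "evidence", "evidence", "claim", "conclusion"]

# Count state-to-state transitions.
counts = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    counts[prev][nxt] += 1

# Normalize counts into conditional probabilities P(next | prev).
transitions = {
    s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
    for s, nxts in counts.items()
}
```

Because the transition matrix is defined over abstract states rather than surface words, the same estimation applies unchanged to documents in any language, which is the sense in which such a representation could be language-independent.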