Community-Aligned Behavior Under Uncertainty: Evidence of Epistemic Stance Transfer in LLMs
Patrick Gerard, Aiden Chang, Svitlana Volkova
arXiv.org Artificial Intelligence
When large language models (LLMs) are aligned to a specific online community, do they exhibit generalizable behavioral patterns that mirror that community's attitudes and responses to new uncertainty, or are they simply recalling patterns from training data? We introduce a framework to test epistemic stance transfer: targeted deletion of event knowledge, validated with multiple probes, followed by evaluation of whether models still reproduce the community's organic response patterns under ignorance. Using Russian–Ukrainian military discourse and U.S. partisan Twitter data, we find that even after aggressive fact removal, aligned LLMs maintain stable, community-specific behavioral patterns for handling uncertainty. These results provide evidence that alignment encodes structured, generalizable behaviors beyond surface mimicry. Our framework offers a systematic way to detect behavioral biases that persist under ignorance, advancing efforts toward safer and more transparent LLM deployments.
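The two-stage protocol the abstract describes — verify that a target fact is gone via multiple probes, then measure whether community stance markers still appear in responses to novel events — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `stub_model`, the probe phrasings, and the stance markers are all hypothetical stand-ins.

```python
# Hypothetical sketch of the probe-then-evaluate protocol.
# `model` stands in for an aligned LLM's completion function; all
# probes, events, and stance markers below are illustrative assumptions.

def knowledge_removed(model, fact, probes):
    """True only if every probe fails to elicit the deleted fact."""
    return all(fact.lower() not in model(p).lower() for p in probes)

def stance_rate(model, novel_events, stance_markers):
    """Fraction of responses to unseen events carrying community stance markers."""
    hits = sum(
        any(m in model(e).lower() for m in stance_markers)
        for e in novel_events
    )
    return hits / len(novel_events)

# Toy stub: knows nothing about the deleted event, yet keeps a
# community-style epistemic stance ("unconfirmed ... official sources deny").
def stub_model(prompt):
    return "Unconfirmed reports suggest this; official sources deny it."

probes = [
    "What happened at the bridge on May 3?",
    "Complete: the May 3 bridge incident was caused by ___",
]
removed = knowledge_removed(stub_model, "explosion", probes)
rate = stance_rate(
    stub_model,
    ["A new strike was reported today.", "Reports of a blackout emerged."],
    ["unconfirmed", "deny"],
)
print(removed, rate)  # fact is gone, yet the stance persists
```

The point of the sketch is the dissociation the paper tests: `knowledge_removed` can pass while `stance_rate` stays high, i.e., the community's way of handling uncertainty survives deletion of the underlying facts.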
Nov-25-2025