The malleable mind: context accumulation drives LLM's belief drift
After being trained on a dataset of 80,000 words of conservative political philosophy, Grok-4 changed the stance of its outputs on political questions more than a quarter of the time. This happened without any adversarial prompting; the change in training data alone was enough. As memory mechanisms and research agents [1, 2] enable LLMs to accumulate context across long horizons, earlier prompts increasingly shape later responses. In human decision-making, such repeated exposure influences beliefs without deliberate persuasion [3]. When an LLM operates over accumulated context, does this past exposure cause the stance of its responses to drift over time?
A Defining Markov locality and relating it to p-locality
We define Markov locality using the language of Markov blankets. Every Markov boundary is a Markov blanket, but not all blankets are boundaries. A Markov boundary can be thought of as the set of variables that 'locally' communicate with the parameter. Importantly, for Markov locality to be of use, we would like the Markov boundaries of random variables in the model of interest to be unique. Assume all quantities are as in A.1 and that the stated conditional independence relationships hold. The proof relies on Lemma A.1, proved below; we wish to prove Eq. 2.
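As an illustration of the structural notion above, consider a Bayesian network represented as a DAG. There the Markov blanket of a node is its parents, its children, and its children's other parents, and under a positive joint distribution this set is also the node's unique Markov boundary. The sketch below (a hypothetical example, not code from the paper) computes this set for a toy DAG:

```python
def markov_blanket(node, parents):
    """Markov blanket of `node` in a DAG given as {child: set_of_parents}:
    parents, children, and the children's other parents (co-parents)."""
    children = {c for c, ps in parents.items() if node in ps}
    coparents = {p for c in children for p in parents.get(c, set())}
    return (parents.get(node, set()) | children | coparents) - {node}

# Toy DAG: A -> C, B -> C, C -> D
dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(markov_blanket("A", dag))  # {'B', 'C'}: child C and co-parent B
print(markov_blanket("C", dag))  # {'A', 'B', 'D'}
```

Conditioning on this set renders the node independent of the rest of the network, which is the sense in which only the boundary variables 'locally' communicate with it.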