The Unified Cognitive Consciousness Theory for Language Models: Anchoring Semantics, Thresholds of Activation, and Emergent Reasoning

Chang, Edward Y., Kaya, Zeyneb N., Chang, Ethan

arXiv.org Artificial Intelligence 

We propose semantic anchoring, a unified account of how large language models turn pretrained capacity into goal-directed behavior: external structure (in-context examples, retrieval, or light fine-tuning) binds the model's latent patterns to desired targets. Unified Cognitive Consciousness Theory (UCCT) formalizes this via anchoring strength $S = \rho_d - d_r - \log k$, where $\rho_d$ measures target cohesion in representation space, $d_r$ measures mismatch from prior knowledge, and $k$ is the anchor budget. UCCT predicts threshold-like performance flips and strictly generalizes in-context learning, reading retrieval and fine-tuning as anchoring variants. Three controlled studies provide evidence. Experiment 1 demonstrates cross-domain anchoring that rebinds strong priors in text and vision. Experiment 2 varies representational familiarity via numeral bases (base-10/8/9) at fixed complexity, yielding ordered thresholds and transfer patterns that track $\rho_d$, $d_r$, and $S$. Experiment 3 establishes a geometry-to-behavior correlate: layer-wise peak anchoring and trajectory area predict few-shot thresholds $\theta_{50}$. UCCT offers a testable theory and practical metrics for optimizing prompts, retrieval, and fine-tuning.
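To make the anchoring-strength formula concrete, the following minimal sketch computes $S = \rho_d - d_r - \log k$ from a set of $k$ anchor embeddings. The cosine-similarity proxy for $\rho_d$ (target cohesion), the cosine-distance proxy for $d_r$ (prior mismatch), and the logistic link producing the threshold-like flip are illustrative assumptions for exposition, not the paper's estimators.

```python
import numpy as np

def anchoring_strength(anchors: np.ndarray, prior_centroid: np.ndarray) -> float:
    """Estimate S = rho_d - d_r - log k for k anchor embeddings.

    rho_d: target cohesion, proxied here by the mean pairwise cosine
           similarity among the anchor representations (an assumption).
    d_r:   prior mismatch, proxied by the mean cosine distance between
           each anchor and a centroid of prior-knowledge representations.
    """
    k = anchors.shape[0]
    unit = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    sims = unit @ unit.T
    # Mean off-diagonal cosine similarity as the cohesion proxy rho_d.
    rho_d = (sims.sum() - np.trace(sims)) / (k * (k - 1))
    prior = prior_centroid / np.linalg.norm(prior_centroid)
    d_r = float(np.mean(1.0 - unit @ prior))  # mean cosine distance to prior
    return float(rho_d - d_r - np.log(k))

def flip_probability(S: float, beta: float = 4.0) -> float:
    """Logistic link from anchoring strength to success probability,
    yielding the threshold-like flip around S = 0 that UCCT predicts.
    The sharpness parameter beta is a hypothetical choice."""
    return 1.0 / (1.0 + np.exp(-beta * S))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = 8
    anchors = rng.normal(size=(k, 64)) + 2.0  # cohesive few-shot anchors
    prior = rng.normal(size=64)               # stand-in prior centroid
    S = anchoring_strength(anchors, prior)
    print(f"S = {S:.3f}, P(flip) = {flip_probability(S):.3f}")
```

Under these proxies, raising cohesion among anchors or lowering their mismatch with the prior increases $S$, while growing the anchor budget $k$ pays a $\log k$ penalty, which is one way to read the abstract's prediction that performance flips once the anchors are strong enough.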