Seed-Induced Uniqueness in Transformer Models: Subspace Alignment Governs Subliminal Transfer
Okatan, Ayşe Selin, Akbaş, Mustafa İlhan, Kandel, Laxima Niure, Peköz, Berker
arXiv.org Artificial Intelligence
We analyze subliminal transfer in Transformer models, where a teacher embeds hidden traits that a student can linearly decode without degrading main-task performance. Prior work often attributes transferability to global representational similarity, typically quantified with Centered Kernel Alignment (CKA). Using synthetic corpora with disentangled public and private labels, we distill students under matched and independent random initializations. We find that transfer strength hinges on alignment within a trait-discriminative subspace: same-seed students inherit this alignment and show higher leakage (τ ≈ 0.24), whereas different-seed students, despite global CKA > 0.9, exhibit substantially reduced excess accuracy (τ ≈ 0.12–0.13). We formalize this with a subspace-level CKA diagnostic and residualized probes, showing that leakage tracks alignment within the trait-discriminative subspace rather than global representational similarity. Security controls (projection penalty, adversarial reversal, right-for-the-wrong-reasons regularization) reduce leakage in same-base models without impairing public-task fidelity. These results establish seed-induced uniqueness as a resilience property and argue for subspace-aware diagnostics in secure multi-model deployments.
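The subspace-level diagnostic described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes linear CKA in its standard form and, as a stand-in for the trait-discriminative subspace, uses the top singular directions of each model's class-mean matrix (the paper's exact construction may differ). `trait_subspace` and `subspace_cka` are hypothetical helper names.

```python
import numpy as np

def linear_cka(X, Y):
    """Standard linear CKA between representation matrices (n_samples x dim)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, ord="fro")
                    * np.linalg.norm(Y.T @ Y, ord="fro"))

def trait_subspace(X, labels, k=1):
    """Hypothetical trait-discriminative basis (dim x k): top-k right
    singular vectors of the matrix of centered per-class means."""
    Xc = X - X.mean(axis=0)
    means = np.stack([Xc[labels == c].mean(axis=0) for c in np.unique(labels)])
    _, _, vt = np.linalg.svd(means, full_matrices=False)
    return vt[:k].T

def subspace_cka(X, Y, labels, k=1):
    """CKA restricted to each model's own trait-discriminative subspace."""
    return linear_cka(X @ trait_subspace(X, labels, k),
                      Y @ trait_subspace(Y, labels, k))
```

Under this sketch, two models can agree globally (high `linear_cka`) while disagreeing on where the private trait lives (low `subspace_cka`), which is the dissociation the abstract attributes to independent seeds.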
Nov-4-2025