Language steering in latent space to mitigate unintended code-switching
Goncharov, Andrey, Kondusov, Nikolai, Zaytsev, Alexey
arXiv.org Artificial Intelligence
Multilingual Large Language Models (LLMs) often exhibit unintended code-switching, reducing reliability in downstream tasks. We propose latent-space language steering, a lightweight inference-time method that identifies language directions via PCA on parallel translations and steers token embeddings along these axes to control language identity. Our approach mitigates code-switching while preserving semantics with negligible computational overhead and requires only minimal parallel data for calibration. Empirically, we achieve 95-99% language classification accuracy using a single principal component and reduce next-token distributional divergence by up to 42% across multiple language pairs on Qwen2.5 and Llama-3.2 models. We further analyze the layer-wise evolution of language representations, revealing that language identity concentrates in final layers with near-perfect linear separability.
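The two core operations the abstract describes — extracting a language direction via PCA on hidden states of parallel translations, and shifting activations along that axis — can be sketched as follows. This is a minimal NumPy illustration under assumptions not stated in the abstract: hidden states are assumed to be pre-pooled into per-sentence vectors, and the function names (`language_direction`, `steer`) and the steering strength `alpha` are illustrative, not the paper's API.

```python
import numpy as np

def language_direction(h_lang_a, h_lang_b):
    """Estimate a language-identity axis as the first principal
    component of pooled hidden states from parallel translations.
    h_lang_a, h_lang_b: (n, d) arrays of per-sentence hidden states."""
    X = np.vstack([h_lang_a, h_lang_b])      # (2n, d) pooled matrix
    X = X - X.mean(axis=0)                   # center before PCA
    # Top right-singular vector of the centered matrix = PC1.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]                             # unit-norm direction, shape (d,)

def steer(h, direction, alpha):
    """Shift hidden states by alpha along the (normalized) language
    axis, leaving orthogonal (semantic) components untouched."""
    direction = direction / np.linalg.norm(direction)
    return h + alpha * direction
```

On synthetic clusters separated along a known axis, projecting onto the recovered component linearly separates the two "languages", which is the property the 95-99% single-component classification accuracy reported above relies on.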
Oct-17-2025