RomanLens: The Role Of Latent Romanization In Multilinguality In LLMs
Alan Saji, Jaavid Aktar Husain, Thanmay Jayakumar, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully
Large Language Models (LLMs) exhibit remarkable multilingual generalization despite being trained predominantly on English-centric corpora. A fundamental question arises: how do LLMs achieve such robust multilingual capabilities? Focusing on non-Roman script languages, we investigate the role of Romanization (the representation of non-Roman scripts using Roman characters) as a bridge in multilingual processing. Using mechanistic interpretability techniques, we analyze next-token generation and find that intermediate layers frequently represent target words in Romanized form before transitioning to native script, a phenomenon we term Latent Romanization. Further, through activation patching experiments, we demonstrate that LLMs encode semantic concepts similarly across native and Romanized scripts, suggesting a shared underlying representation. Additionally, for translation into non-Roman script languages, our findings reveal that when the target language is in Romanized form, its representations emerge in earlier model layers than they do for native script. These insights contribute to a deeper understanding of multilingual representation in LLMs and highlight the implicit role of Romanization in facilitating language transfer.
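The layer-wise analysis the abstract describes is in the spirit of the logit-lens technique: projecting each intermediate hidden state through the model's output embedding and reading off the nearest token. The minimal sketch below illustrates that idea only; the model name, the probe prompt, and the Hindi example words are illustrative assumptions, not the authors' actual setup or code.

```python
# Minimal logit-lens sketch: decode each layer's hidden state at the last
# position through the final norm and output embedding, and watch whether a
# Romanized form of the target word surfaces before the native script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any LLaMA-style causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = 'Translate to Hindi: "book" ->'  # hypothetical probe prompt

with torch.no_grad():
    inputs = tok(prompt, return_tensors="pt")
    out = model(**inputs)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [1, seq, d_model]
final_norm = model.model.norm  # final RMSNorm in LLaMA-style models
unembed = model.lm_head        # output embedding used as the "lens"

for layer, h in enumerate(out.hidden_states):
    logits = unembed(final_norm(h[:, -1]))      # project the last position
    top_token = tok.decode(logits.argmax(dim=-1))
    print(f"layer {layer:2d}: {top_token!r}")
# Latent Romanization would show up as the Romanized word (e.g. 'kitab')
# at intermediate layers, before the native script 'किताब' near the top.
```

A similar loop over paired native-script and Romanized prompts, swapping hidden activations between the two runs, would correspond to the activation patching experiments the abstract mentions.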
arXiv.org Artificial Intelligence
Feb-16-2025