Exploring Cross-Lingual Knowledge Transfer via Transliteration-Based MLM Fine-Tuning for Critically Low-resource Chakma Language
Adity Khisa, Nusrat Jahan Lia, Tasnim Mahfuz Nafis, Zarif Masud, Tanzir Pial, Shebuti Rayana, Ahmedul Kabir
arXiv.org Artificial Intelligence
As an Indo-Aryan language with limited available data, Chakma remains largely underrepresented in language models. In this work, we introduce a novel corpus of contextually coherent Bangla-transliterated Chakma, curated from Chakma literature and validated by native speakers. Using this dataset, we fine-tune six encoder-based transformer models, spanning multilingual (mBERT, XLM-RoBERTa, DistilBERT), regional (BanglaBERT, IndicBERT), and monolingual English (DeBERTaV3) variants, on the masked language modeling (MLM) objective. Our experiments show that fine-tuned multilingual models outperform their pre-trained counterparts when adapted to Bangla-transliterated Chakma, achieving up to 73.54% token accuracy and a perplexity as low as 2.90. Our analysis further highlights the impact of data quality on model performance and shows the limitations of OCR pipelines for morphologically rich Indic scripts. Our research demonstrates that Bangla-transliterated Chakma enables effective transfer learning for the Chakma language, and we release our dataset to encourage further research on multilingual language modeling for low-resource languages.
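The two metrics reported above are linked by a standard identity: perplexity is the exponential of the mean negative log-likelihood (the MLM cross-entropy loss averaged over masked tokens). The sketch below only illustrates this standard relationship; it does not reproduce the paper's evaluation pipeline, and the loss value shown is simply back-derived from the reported perplexity of 2.90 for illustration.

```python
import math

def perplexity(mean_nll: float) -> float:
    """Perplexity = exp(mean negative log-likelihood), where the NLL is the
    masked-LM cross-entropy loss averaged over the masked tokens."""
    return math.exp(mean_nll)

# A reported perplexity of 2.90 implies a mean MLM loss of ln(2.90)
# (about 1.065 nats per masked token):
loss = math.log(2.90)
print(round(perplexity(loss), 2))  # → 2.9
```

Lower perplexity means the model spreads less probability mass over wrong candidates for each masked token, which is why it moves together with token accuracy in the results above.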
Nov-27-2025
- Country:
- Asia
- Bangladesh > Dhaka Division
- Dhaka District > Dhaka (0.05)
- India > Tripura (0.04)
- Indonesia > Bali (0.04)
- Myanmar (0.04)
- Singapore (0.04)
- North America
- Canada > Ontario
- Toronto (0.04)
- United States
- New York > Suffolk County
- Stony Brook (0.04)
- Virginia (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Energy (0.47)