CUTE: A Multilingual Dataset for Enhancing Cross-Lingual Knowledge Transfer in Low-Resource Languages
arXiv.org Artificial Intelligence
Large Language Models (LLMs) demonstrate exceptional zero-shot capabilities across various NLP tasks, significantly enhancing user experience and efficiency. However, this advantage is largely limited to resource-rich languages; for the diverse array of low-resource languages, support remains inadequate, with the scarcity of training corpora considered the primary cause. We construct and open-source CUTE, a Chinese-Uyghur-Tibetan-English dataset consisting of two 25GB four-language corpora (one parallel and one non-parallel) obtained through machine translation. CUTE encompasses two resource-rich languages (Chinese and English) and two low-resource languages (Uyghur and Tibetan). Prior to constructing CUTE, human assessment validated that the quality of Chinese-Uyghur and Chinese-Tibetan machine translation approaches that of Chinese-English translation. CUTE represents the largest open-source corpus for the Uyghur and Tibetan languages to date. We demonstrate its effectiveness in enhancing LLMs' ability to process low-resource languages and investigate the role of corpus parallelism in cross-lingual transfer learning. The CUTE corpus and related models are made publicly available to the research community.
Sep-23-2025