Adapting Language-Specific LLMs to a Reasoning Model in One Day via Model Merging - An Open Recipe

Kunat Pipatanakul, Pittawat Taveekitworachai, Potsawee Manakul, Kasima Tharnpipitchai

arXiv.org Artificial Intelligence 

This paper investigates data selection and model merging methodologies aimed at incorporating advanced reasoning capabilities, such as those of DeepSeek R1, into language-specific large language models (LLMs), with a particular focus on a Thai LLM. Our goal is to enhance the reasoning capabilities of language-specific LLMs while preserving their target-language abilities. DeepSeek R1 excels at reasoning but primarily benefits high-resource languages such as English and Chinese; low-resource languages remain underserved because English-centric training data and model optimizations limit performance in those languages. This limitation results in unreliable code-switching and diminished effectiveness on tasks in low-resource languages. Meanwhile, local and regional LLM initiatives have attempted to bridge this gap by developing language-specific LLMs that focus on improving local linguistic fidelity. This work releases the data, merge configurations, and model weights to promote the advancement of language-specific LLM initiatives.

Recent advancements in large language models (LLMs) have demonstrated remarkable capabilities in complex reasoning tasks, particularly through innovations in test-time scaling and specialized training paradigms (DeepSeek-AI et al., 2025).
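As a rough illustration of what "model merging" can mean in this setting, the sketch below interpolates the weights of two checkpoints that share the same architecture (for example, a language-specific model and a reasoning-focused model). The model paths, the merge ratio, and the use of simple linear interpolation are illustrative assumptions for this sketch, not the paper's released merge configuration.

```python
# Minimal sketch: parameter-wise linear interpolation of two same-architecture
# checkpoints. Paths and ALPHA are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM

BASE = "path/to/language-specific-llm"   # hypothetical: Thai-specialized model
REASONING = "path/to/reasoning-llm"      # hypothetical: reasoning-focused model
ALPHA = 0.5                              # assumed merge ratio, not from the paper

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
reasoning = AutoModelForCausalLM.from_pretrained(REASONING, torch_dtype=torch.bfloat16)

reasoning_state = reasoning.state_dict()
merged_state = {}
for name, w_base in base.state_dict().items():
    # Both models must have identical parameter names and shapes for this to work.
    w_reason = reasoning_state[name]
    # Keep ALPHA of the language-specific weights and (1 - ALPHA) of the
    # reasoning model's weights, parameter by parameter.
    merged_state[name] = ALPHA * w_base + (1.0 - ALPHA) * w_reason

base.load_state_dict(merged_state)
base.save_pretrained("merged-model")
```

In practice, merge recipes often vary the ratio per layer or use more elaborate schemes (e.g., task-vector or TIES-style merging); the uniform ratio here is only meant to convey the basic idea of combining weights from two specialized models.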