Towards Better Understanding of Program-of-Thought Reasoning in Cross-Lingual and Multilingual Environments
Patomporn Payoungkhamdee, Pume Tuchinda, Jinheon Baek, Samuel Cahyawijaya, Can Udomcharoenchaikit, Potsawee Manakul, Peerat Limkonchotiwat, Ekapol Chuangsuwanich, Sarana Nutanong
arXiv.org Artificial Intelligence
Multi-step reasoning is essential for large language models (LLMs), yet multilingual reasoning performance remains a challenge. While Chain-of-Thought (CoT) prompting improves reasoning, it struggles with non-English languages due to the entanglement of reasoning and execution. Program-of-Thought (PoT) prompting separates reasoning from execution, offering a promising alternative but shifting the challenge to generating programs from non-English questions. We propose a framework to evaluate PoT by separating multilingual reasoning from code execution, examining (i) the impact of fine-tuning on question-reasoning alignment and (ii) how reasoning quality affects answer correctness. Our findings demonstrate that PoT fine-tuning substantially enhances multilingual reasoning, outperforming CoT fine-tuned models. We further demonstrate a strong correlation between reasoning quality (measured through code quality) and answer accuracy, highlighting its potential as a test-time performance improvement heuristic.
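To make the PoT setup concrete, here is a minimal sketch of the reasoning/execution split the abstract describes. The "generated program" below is hand-written for illustration; in the actual pipeline it would be emitted by an LLM from a (possibly non-English) question, and the executor would be sandboxed:

```python
# Question (imagine it posed in a non-English language):
# "A shop sells pens at 12 baht each. Anna buys 5 pens and pays
#  with a 100-baht note. How much change does she get?"

# Step 1 (reasoning): instead of free-form chain-of-thought text,
# the model emits a program. This decouples multilingual question
# understanding from arithmetic, which is delegated to an interpreter.
generated_program = """
price_per_pen = 12
pens_bought = 5
paid = 100
total_cost = price_per_pen * pens_bought
change = paid - total_cost
answer = change
"""

def execute(program: str):
    """Step 2 (execution): run the program deterministically and
    read out the conventional `answer` variable. Sandboxing and
    error handling are omitted in this sketch."""
    namespace = {}
    exec(program, namespace)
    return namespace["answer"]

print(execute(generated_program))  # 100 - 12*5 = 40
```

Because the final answer comes from the interpreter rather than from the model's text, answer correctness hinges on the quality of the generated code, which is why the paper can use code quality as a proxy for reasoning quality.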
Feb-25-2025