latent language
Do LLMs Need to Think in One Language? Correlation between Latent Language and Task Performance
Ozaki, Shintaro, Hiraoka, Tatsuya, Otake, Hiroto, Ouchi, Hiroki, Isonuma, Masaru, Heinzerling, Benjamin, Inui, Kentaro, Watanabe, Taro, Miyao, Yusuke, Oseki, Yohei, Takagi, Yu
Large Language Models (LLMs) are known to consistently process information in an internal language in which they are proficient, referred to as their latent language, which may differ from the input or output language. However, how the discrepancy between the latent language and the input and output languages affects downstream task performance remains largely unexplored. While many studies investigate the latent language of LLMs, few examine its influence on task performance. In this study, we hypothesize that thinking consistently in the latent language enhances downstream task performance. To test this hypothesis, we vary the input prompt language across multiple downstream tasks and analyze the correlation between consistency in the latent language and task performance. We create datasets of questions from domains such as translation and geo-culture, which are sensitive to the choice of latent language. Experimental results across multiple LLMs on these tasks indicate that maintaining consistency in the latent language is not always necessary for optimal downstream performance: the models adapt their internal representations near the final layers to match the target language, reducing the impact of consistency on overall performance.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Reviews: Hierarchical Decision Making by Generating and Following Natural Language Instructions
Post Rebuttal: Thank you for your response. I do see the advantages you listed to support the choice of language over programs. Overall, I feel the general direction of using language for intermediate supervision is really interesting and worthy of further study. However, this paper could be significantly improved in some regards. For example: - The authors should study the generated language to test it for compositionality (as other reviewers have pointed out).