Demystifying Hybrid Thinking: Can LLMs Truly Switch Between Think and No-Think?

Shouren Wang, Yang Wang, Xianxuan Long, Qifan Wang, Vipin Chaudhary, Xiaotian Han

arXiv.org Artificial Intelligence 

Hybrid thinking enables LLMs to switch between reasoning and direct answering, offering a balance between efficiency and reasoning capability. Yet our experiments reveal that current hybrid thinking LLMs achieve only partial mode separation: reasoning behaviors often leak into the no-think mode. To understand and mitigate this, we analyze the factors influencing controllability and identify four that matter most: (1) a larger data scale, (2) using think and no-think answers drawn from different questions rather than the same question, (3) a moderate increase in the amount of no-think data, and (4) a two-phase strategy that first trains reasoning ability and then applies hybrid-think training. Building on these findings, we propose a practical recipe that, compared to standard training, maintains accuracy in both modes while significantly reducing no-think output length (from 1085 to 585 on MATH500) and occurrences of reasoning-supportive tokens such as "wait" (from 5917 to 522 on MATH500). Our findings highlight the limitations of current hybrid thinking and offer directions for strengthening its controllability.

We compare the responses of Qwen3-8B under the no-think and think modes. In the no-think mode, Qwen3-8B still performs reasoning outside the no-think constraint (e.g., generating reasoning-supportive words such as "Wait"), indicating that its hybrid thinking ability remains imperfect and cannot achieve full control.
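The leakage described above can be measured by scanning no-think outputs for reasoning-supportive markers. The sketch below is illustrative only, not the paper's evaluation code: the marker list, the `count_reasoning_tokens` helper, and the sample response are all assumptions for demonstration.

```python
# Minimal sketch (assumed, not the authors' code): detect reasoning
# "leakage" in a no-think response by counting marker tokens and
# taking a rough length measurement.

REASONING_TOKENS = ("wait", "hmm", "let me re-check")  # assumed markers

def count_reasoning_tokens(response: str) -> dict[str, int]:
    """Case-insensitive count of each marker substring in the response."""
    text = response.lower()
    return {tok: text.count(tok) for tok in REASONING_TOKENS}

def no_think_length(response: str) -> int:
    """Crude length proxy: whitespace-delimited word count
    (the paper's length metric may differ, e.g. model tokens)."""
    return len(response.split())

# Hypothetical no-think output that still leaks reasoning behavior:
sample = "The answer is 42. Wait, let me re-check the arithmetic. Yes, 42."
print(count_reasoning_tokens(sample), no_think_length(sample))
```

A well-separated no-think mode would drive all marker counts to (near) zero; the paper's reported drop from 5917 to 522 "wait" occurrences on MATH500 corresponds to reducing, but not eliminating, such leakage.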