StyleBench: Evaluating thinking styles in Large Language Models
Junyu Guo, Shangding Gu, Ming Jin, Costas Spanos, Javad Lavaei
–arXiv.org Artificial Intelligence
The effectiveness of Large Language Models (LLMs) is heavily influenced by the reasoning strategies, or styles of thought, employed in their prompts. However, the interplay between these reasoning styles, model architecture, and task type remains poorly understood. To address this, we introduce StyleBench, a comprehensive benchmark for systematically evaluating reasoning styles across diverse tasks and models. We assess five representative reasoning styles--Chain-of-Thought (CoT), Tree-of-Thought (ToT), Algorithm-of-Thought (AoT), Sketch-of-Thought (SoT), and Chain-of-Draft (CoD)--on five reasoning tasks, using 15 open-source models from major families (LLaMA, Qwen, Mistral, Gemma, GPT-OSS, Phi, and DeepSeek) ranging from 270M to 120B parameters. Our large-scale analysis reveals that no single style is universally optimal. We demonstrate that strategy efficacy is highly contingent on both model scale and task type: search-based methods (AoT, ToT) excel on open-ended problems but require large-scale models, while concise styles (SoT, CoD) achieve radical efficiency gains on well-defined tasks. Furthermore, we identify key behavioral patterns: smaller models frequently fail to follow output instructions and default to guessing, while reasoning robustness emerges as a function of scale. Our findings offer a practical roadmap for selecting optimal reasoning strategies under specific constraints. We open-source the benchmark at https://github.com/JamesJunyuGuo/Style_Bench.

Large Language Models (LLMs) have demonstrated impressive capabilities across a diverse range of tasks, including mathematical reasoning, code generation, and complex question answering (Imani et al., 2023; Wang & Chen, 2023; Tan et al., 2023). A key insight from prior work is that their performance on challenging problems is not merely a function of scale, but is critically dependent on the methods used to guide reasoning (Huang & Yang, 2025).
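The styles-by-models evaluation grid described above can be sketched as a simple harness. Everything below is illustrative, not the StyleBench implementation: the style prompt prefixes, the placeholder model IDs, and the `query_model` stub are all assumptions standing in for real prompt templates and LLM calls.

```python
from itertools import product

# Hypothetical prompt prefixes for each reasoning style; the actual
# StyleBench templates may differ.
STYLE_PROMPTS = {
    "CoT": "Let's think step by step.",
    "ToT": "Explore several candidate reasoning branches, then pick the best.",
    "AoT": "Describe an algorithmic search over the solution space.",
    "SoT": "Sketch only the key intermediate steps, as briefly as possible.",
    "CoD": "Write minimal drafts of each step, then give the final answer.",
}

MODELS = ["llama-3b", "qwen-7b", "mistral-7b"]  # placeholder model IDs


def query_model(model: str, prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. to a local inference server)."""
    return "42"  # stub answer for illustration


def evaluate(tasks: list[dict]) -> dict:
    """Run every (style, model) pair over the tasks and record exact-match accuracy."""
    results = {}
    for style, model in product(STYLE_PROMPTS, MODELS):
        correct = 0
        for task in tasks:
            prompt = f"{STYLE_PROMPTS[style]}\n\n{task['question']}"
            answer = query_model(model, prompt)
            correct += answer.strip() == task["answer"]
        results[(style, model)] = correct / len(tasks)
    return results
```

With real model backends plugged into `query_model`, the same loop yields the per-style, per-model accuracy grid that an analysis like the one above would compare.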
This has spurred the development of sophisticated prompting techniques designed to structure the model's internal reasoning process. Notable among these are Chain-of-Thought (CoT) (Wei et al., 2022), which decomposes problems into sequential steps; more advanced paradigms like Tree-of-Thought (ToT) (Yao et al., 2023), which explores multiple reasoning paths in parallel; and ReasonFlux (Yang et al., 2025b), which employs high-level templates to explore potential solutions. Performance remains highly sensitive to prompt phrasing and frequently necessitates iterative feedback to achieve robust results (Sel et al., 2023). In response, recent work has sought to automate reasoning strategy selection.
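The contrast between sequential CoT and branching ToT can be made concrete with a minimal beam-style tree search. This is a generic sketch of the idea, not a specific published implementation: the `expand` and `score` callables (normally LLM-backed thought generation and evaluation), the branching depth, and the beam width are all illustrative assumptions.

```python
def tree_of_thought(root, expand, score, depth=3, beam=2):
    """Minimal beam-style ToT search.

    At each level, expand every candidate thought into children,
    keep the top-`beam` children by `score`, and repeat for `depth`
    levels. CoT corresponds to the degenerate case beam=1 with a
    single expansion per step (one linear chain of thoughts).
    """
    frontier = [root]
    for _ in range(depth):
        candidates = [c for thought in frontier for c in expand(thought)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)
```

In a real LLM setting, `expand` would prompt the model to propose continuations of a partial solution and `score` would ask the model (or a heuristic) to rate each partial path, which is where the large-model requirement for search-based styles noted above comes from.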
Sep-26-2025