Effectiveness of Chain-of-Thought in Distilling Reasoning Capability from Large Language Models
Cong-Thanh Do, Rama Doddipatla, Kate Knill
arXiv.org Artificial Intelligence
Chain-of-Thought (CoT) prompting is a widely used method to improve the reasoning capability of Large Language Models (LLMs). More recently, CoT has been leveraged in Knowledge Distillation (KD) to transfer reasoning capability from a larger LLM to a smaller one. This paper examines the role of CoT in distilling reasoning capability from larger LLMs to smaller ones using white-box KD, analysing its effectiveness in improving the distilled models' performance across a range of natural language reasoning and understanding tasks. We conduct white-box KD experiments using LLMs from the Qwen and Llama2 families, employing CoT data from the CoT-Collection dataset. The distilled models are then evaluated on tasks from the BIG-Bench-Hard (BBH) benchmark, which presents complex challenges for smaller LLMs. Experimental results demonstrate that CoT improves white-box KD effectiveness, enabling the distilled models to achieve better average performance on the BBH reasoning and understanding tasks.
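White-box KD typically matches the student's output distribution to the teacher's at the token level, commonly via a KL-divergence loss over temperature-softened logits. The abstract does not specify the paper's exact objective, so the following is a minimal illustrative sketch of that standard logit-matching loss, with toy logit values (not real model outputs):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over a list of logits, with temperature scaling."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Forward KL(teacher || student) on softened distributions for one
    token position -- the typical white-box distillation objective.
    A higher temperature exposes more of the teacher's 'dark knowledge'
    about non-argmax tokens."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy vocabulary logits for a single token position (illustrative values).
teacher = [2.0, 1.0, 0.1]
student = [1.5, 1.2, 0.3]
loss = kd_loss(teacher, student)
```

In CoT-based distillation, this per-token loss is accumulated over the teacher-generated rationale tokens as well as the final answer tokens, so the student is trained to reproduce the reasoning chain rather than only the answer.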
Nov-10-2025