Implicit Chain of Thought Reasoning via Knowledge Distillation
Yuntian Deng, Kiran Prasad, Roland Fernandez, Paul Smolensky, Vishrav Chaudhary, Stuart Shieber
arXiv.org Artificial Intelligence, Nov 2, 2023
To augment language models with the ability to reason, researchers usually prompt or finetune them to produce chain-of-thought reasoning steps before giving the final answer. However, although people use natural language to reason effectively, it may be that LMs could reason more effectively with some intermediate computation that is not in natural language. In this work, we explore an alternative reasoning approach: instead of explicitly producing the chain-of-thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning. The implicit reasoning steps are distilled from a teacher model trained on explicit chain-of-thought reasoning, and instead of doing reasoning "horizontally" by producing intermediate words one by one, we distill it so that the reasoning happens "vertically" among the hidden states of different layers. We conduct experiments on a multi-digit multiplication task and a grade-school math problem dataset and find that this approach enables solving tasks previously not solvable without explicit chain-of-thought reasoning, at a speed comparable to no chain of thought.

To elicit language models' reasoning abilities, a prevalent paradigm has been the chain-of-thought reasoning approach (Nye et al., 2021; Wei et al., 2022b; Kojima et al., 2022). Under this paradigm, models are trained or prompted to articulate intermediate steps before producing the final answer. Although this approach aligns with human problem-solving strategies, it might not fully leverage the computational potential of these models. Consider the transformer architecture (Vaswani et al., 2017), which can manifest computation both "horizontally," by generating words in sequence, and "vertically," by processing through its many layers of internal hidden states. With models like GPT-3 having as many as 96 layers (Brown et al., 2020), one might wonder: why not let these models reason internally, "vertically" through their layers, and present the solution without necessarily articulating every intermediate step? Such an approach would not only save the significant time cost of autoregressively generating the chain of thought; it might also allow models to develop more efficient, if less human-interpretable, methods of reasoning, unconstrained by human conventions.
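To make the "vertical" distillation idea concrete, the sketch below shows one way it could look in PyTorch. It is a minimal illustration under strong simplifying assumptions, not the authors' implementation: TinyCausalLM, distill_step, the even spacing of teacher chain-of-thought positions across student layers, the MSE matching loss, and the weight alpha are all hypothetical choices, and the paper's full recipe differs in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyCausalLM(nn.Module):
    """Toy causal LM that returns its per-layer hidden states, so a student
    can be supervised "vertically" against a teacher's states."""

    def __init__(self, vocab=128, d=64, n_layers=4, n_heads=4, max_len=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.pos = nn.Embedding(max_len, d)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d, n_heads, 4 * d, batch_first=True)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(d, vocab)

    def forward(self, ids):
        T = ids.size(1)
        # Additive causal mask: -inf above the diagonal blocks future tokens.
        causal = torch.full((T, T), float("-inf"), device=ids.device).triu(1)
        h = self.tok(ids) + self.pos(torch.arange(T, device=ids.device))
        hiddens = []
        for layer in self.layers:
            h = layer(h, src_mask=causal)
            hiddens.append(h)
        return self.head(h), hiddens


def distill_step(teacher, student, inp, cot, ans, alpha=1.0):
    """One training step. `inp`, `cot`, `ans` are (batch, len) token ids for
    the problem, the teacher's chain-of-thought, and the answer."""
    # Frozen teacher reads [input; chain of thought; answer].
    with torch.no_grad():
        _, t_hid = teacher(torch.cat([inp, cot, ans], dim=1))

    # One teacher CoT position per student layer, spread evenly: low student
    # layers mimic early reasoning steps, high layers mimic late ones.
    # (Even spacing, MSE, and reading only the teacher's top layer are
    # illustrative simplifications.)
    L = len(student.layers)
    start = inp.size(1)
    cols = torch.linspace(start, start + cot.size(1) - 1, L).long()

    # The student never sees the CoT tokens; any reasoning must happen
    # inside its layer stack.
    logits, s_hid = student(torch.cat([inp, ans], dim=1))

    # Vertical distillation: student layer l's state at the last input
    # position is pulled toward the teacher's state at the l-th CoT column.
    kd = sum(
        F.mse_loss(s_hid[l][:, start - 1], t_hid[-1][:, cols[l]])
        for l in range(L)
    )

    # Ordinary next-token loss, restricted to the answer span.
    ans_logits = logits[:, start - 1 : -1]
    ce = F.cross_entropy(
        ans_logits.reshape(-1, ans_logits.size(-1)), ans.reshape(-1)
    )
    return ce + alpha * kd
```

Because the student reads only the problem and decodes the answer directly, its inference cost matches a no-chain-of-thought model, which is the speed advantage the abstract reports; the distillation loss is what transplants the teacher's step-by-step computation into the student's layer stack.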