
Collaborating Authors: Viteri, Scott


Uncovering Latent Chain of Thought Vectors in Language Models

arXiv.org Artificial Intelligence

As language models grow more influential and trusted in our society, our ability to reliably steer them toward favorable behaviors becomes increasingly important. To this end, we investigate the technique of steering vectors: biasing the forward pass of a language model with a "steering vector" derived from a specific task. We apply these vectors to steer language models toward performing Chain of Thought (CoT) reasoning without explicit natural-language prompting. We demonstrate this approach on Llama 3 8B and Mistral 7B v0.2, obtaining results competitive with CoT-prompted performance on a series of reasoning benchmarks (GSM8k, MMLU, AGI Eval, ARC AI2), supported by qualitative examples. We find this approach yields consistent steering toward CoT responses and requires less compute than fine-tuning models toward CoT.
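
As a rough illustration of the steering-vector technique the abstract describes, the sketch below adds a vector to the residual stream of a HuggingFace causal LM via a forward hook. The model name, layer index, scaling factor, and the contrastive prompts used to extract the vector are illustrative assumptions, not the paper's exact procedure.

# Minimal sketch of activation steering with a forward hook (assumptions: a
# Llama-style causal LM, an arbitrary layer choice, and toy contrastive prompts).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # assumption: any Llama-style model with .model.layers
LAYER_IDX = 15                             # illustrative layer choice
ALPHA = 4.0                                # illustrative steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_hidden(prompt: str) -> torch.Tensor:
    """Mean hidden state after decoder layer LAYER_IDX for the given prompt."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so layer LAYER_IDX sits at index LAYER_IDX + 1.
    return out.hidden_states[LAYER_IDX + 1][0].mean(dim=0)

# Contrastive prompts used to extract the vector (illustrative, not the paper's dataset).
cot_vec = mean_hidden("Q: A train travels 60 km in 1.5 hours. How fast is it? Let's think step by step.")
plain_vec = mean_hidden("Q: A train travels 60 km in 1.5 hours. How fast is it? Answer:")
steering_vector = cot_vec - plain_vec

def steering_hook(module, inputs, output):
    # Decoder layers return a tuple; element 0 holds the hidden states.
    hidden = output[0] + ALPHA * steering_vector.to(output[0].dtype)
    return (hidden,) + tuple(output[1:])

handle = model.model.layers[LAYER_IDX].register_forward_hook(steering_hook)
try:
    ids = tok("Q: If 3 pencils cost 45 cents, how much do 7 pencils cost?", return_tensors="pt")
    print(tok.decode(model.generate(**ids, max_new_tokens=128)[0], skip_special_tokens=True))
finally:
    handle.remove()  # removing the hook restores the unsteered model

Because the hook is the only modification, the same weights serve both steered and unsteered generation, which is part of why this is cheaper than fine-tuning toward CoT.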


Markovian Agents for Informative Language Modeling

arXiv.org Artificial Intelligence

Chain-of-Thought (CoT) reasoning could in principle enable a deeper understanding of a language model's (LM) internal reasoning. However, prior work has shown that LMs can answer questions similarly despite changes in their CoT, which indicates that those models are not truly relying on the CoT. We propose a reinforcement learning technique to produce CoTs that are, on their own, sufficient for predicting future text, independent of other context. This methodology ensures that if the LM can predict future tokens, then it must have used the CoT to understand its context. We formalize the informativeness of a sender to a receiver LM as the degree to which the sender helps the receiver predict its future observations, and we define a "Markovian" LM as one which predicts future text given only a CoT as context. We derive a "Markovian training" procedure by applying our definition of informativeness to a Markovian LM and optimizing via policy gradient and Proximal Policy Optimization (PPO). We demonstrate our training algorithm's effectiveness on fifteen-term arithmetic problems, show that the model utilizes the CoT, and externally validate that the generated CoT is meaningful and usable by another model.
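
One plausible way to write out the informativeness measure and the Markovian training objective sketched in the abstract is given below; the notation (context x, CoT c, future text y, sender policy \pi_\theta, receiver R) is introduced here for illustration and may differ from the paper's exact formalization.

\[
  \mathrm{Info}(c \to R) \;=\; \log p_R(y \mid c) \;-\; \log p_R(y)
\]
\[
  J(\theta) \;=\; \mathbb{E}_{c \sim \pi_\theta(\cdot \mid x)}\bigl[\log p_R(y \mid c)\bigr],
  \qquad
  \nabla_\theta J(\theta) \;=\; \mathbb{E}_{c \sim \pi_\theta(\cdot \mid x)}\bigl[\log p_R(y \mid c)\,\nabla_\theta \log \pi_\theta(c \mid x)\bigr]
\]

Under this reading, the receiver scores the future text y given only the CoT c, which is what makes the sender "Markovian"; in practice, PPO would replace the raw policy-gradient estimator above with a clipped surrogate objective for stability.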