Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Neural Information Processing Systems
We introduce Inference-Time Intervention (ITI), a technique designed to enhance the "truthfulness" of large language models (LLMs). ITI operates by shifting model activations during inference, following a learned set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength.
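The core operation described in the abstract can be sketched minimally: at inference time, add a scaled direction vector to the activations of a chosen subset of attention heads. The function below is a hypothetical illustration, not the authors' implementation; the shapes, the `alpha` strength parameter, and the `top_heads` selection are assumptions for the sketch.

```python
import numpy as np

def intervene_heads(head_outputs, directions, alpha, top_heads):
    """Shift selected attention-head activations along learned directions.

    head_outputs: array of shape (num_heads, head_dim), activations at
        one token position (hypothetical layout for this sketch).
    directions: array of shape (num_heads, head_dim), one learned
        "truthful" direction per head (e.g. from a linear probe).
    alpha: scalar intervention strength; 0 disables the shift entirely.
    top_heads: indices of the limited set of heads to intervene on.
    """
    out = head_outputs.copy()
    for h in top_heads:
        # Only the chosen heads are shifted; all others pass through.
        out[h] += alpha * directions[h]
    return out
```

Tuning `alpha` is how the truthfulness/helpfulness tradeoff mentioned above would be balanced in this sketch: larger values push activations further along the learned directions, at the cost of deviating more from the model's unmodified behavior.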