Interpretable Language Modeling via Induction-head Ngram Models
Eunji Kim, Sriya Mantena, Weiwei Yang, Chandan Singh, Sungroh Yoon, Jianfeng Gao
arXiv.org Artificial Intelligence
Recent large language models (LLMs) have excelled across a wide range of tasks, but their use in high-stakes and compute-limited settings has intensified the demand for interpretability and efficiency. We address this need by proposing Induction-head ngram models (Induction-Gram), a method that builds an efficient, interpretable LM by bolstering modern ngram models with a hand-engineered "induction head". This induction head uses a custom neural similarity metric to efficiently search the model's input context for potential next-word completions, enabling Induction-Gram to provide ngram-level grounding for each generated token. Experiments show that this simple method significantly improves next-word prediction over baseline interpretable models (by up to 26 percentage points) and can speed up inference for large LLMs through speculative decoding. We further study Induction-Gram in a natural-language neuroscience setting, where the goal is to predict the next fMRI response in a sequence. It again provides a significant improvement over interpretable models (a 20% relative increase in the correlation of predicted fMRI responses), potentially enabling deeper scientific investigation of language selectivity in the brain. The code is available at https://github.com/ejkim47/induction-gram.
Oct-31-2024
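
The abstract describes the induction head only at a high level. Below is a minimal, illustrative sketch of the general idea in Python, under stated assumptions: `toy_embed` is a hypothetical stand-in for the paper's custom neural similarity metric, and the `suffix_len`, `threshold`, and ngram-backoff choices are illustrative, not the authors' implementation (see the linked repository for that).

```python
import numpy as np
from collections import Counter, defaultdict

def toy_embed(window, dim=64):
    """Hypothetical stand-in for the paper's learned similarity metric:
    a position-aware hashed bag-of-tokens vector (consistent within one run)."""
    v = np.zeros(dim)
    for pos, tok in enumerate(window):
        v[hash((pos, tok)) % dim] += 1.0
    return v

def induction_head_predict(tokens, embed, suffix_len=4, threshold=0.9):
    """Fuzzy induction head: scan the context for earlier windows similar to
    the current suffix; propose the token that followed the best match."""
    query = embed(tokens[-suffix_len:])
    best_score, best_next = float("-inf"), None
    for i in range(suffix_len, len(tokens)):
        key = embed(tokens[i - suffix_len : i])
        score = float(query @ key) / (np.linalg.norm(query) * np.linalg.norm(key) + 1e-8)
        if score > best_score:
            best_score, best_next = score, tokens[i]
    return (best_next, best_score) if best_score >= threshold else (None, best_score)

def ngram_counts(tokens, n):
    """Next-token counts for every (n-1)-gram observed in the context."""
    counts = defaultdict(Counter)
    for i in range(n - 1, len(tokens)):
        counts[tuple(tokens[i - n + 1 : i])][tokens[i]] += 1
    return counts

def predict_next(tokens, embed=toy_embed, n=3):
    """Induction-Gram-style backoff: trust the induction head when it finds a
    confident in-context match, otherwise fall back to ngram statistics."""
    candidate, _ = induction_head_predict(tokens, embed)
    if candidate is not None:
        return candidate  # grounded in an explicit span of the input context
    fallback = ngram_counts(tokens, n)[tuple(tokens[-(n - 1):])]
    return fallback.most_common(1)[0][0] if fallback else None

tokens = "the cat sat on the mat because the cat sat on the".split()
print(predict_next(tokens))  # -> "mat", retrieved by the induction head
```

Note how this structure yields the ngram-level grounding the abstract mentions: whenever the induction head fires, the prediction can be traced to the exact earlier span of the input context that matched.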