Activation Steering in Neural Theorem Provers

Kirtania, Shashank

arXiv.org Artificial Intelligence 

Large Language Models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, current state-of-the-art language models struggle to predict the next step in a proof, leading practitioners to rely on various sampling techniques to improve LLM performance. We observe that the LLM is often capable of generating the correct tactic; however, it struggles to rank that tactic appropriately within the set of candidates, degrading the overall selection process. To overcome this hurdle, we use activation steering to guide the LLM's responses and improve its generations at inference time. Our results suggest that activation steering offers a promising, lightweight alternative to specialized fine-tuning for enhancing the theorem-proving capabilities of LLMs, which is particularly valuable in resource-constrained environments.

Interactive proof assistants such as Lean (de Moura et al., 2015), Isabelle (Wenzel et al., 2008), and Coq (Barras et al., 1999) enable the formal verification of mathematical proofs and software by leveraging specialized programming languages (Avigad, 2023; Ringer et al., 2019).
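To make the abstract's approach concrete, below is a minimal sketch of inference-time activation steering using PyTorch and Hugging Face Transformers. It is not the paper's actual configuration: the model, the steered layer, the scaling factor, and the contrastive tactic prompts are all illustrative assumptions. The general pattern is the standard one: compute a steering vector as the difference of mean activations on contrastive examples, then add it to the residual stream at one layer during generation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# All constants below are illustrative assumptions, not the paper's settings.
MODEL_NAME = "gpt2"    # stand-in model; the paper targets a theorem-proving LLM
LAYER_IDX = 6          # transformer block whose output (residual stream) we steer
STEERING_SCALE = 4.0   # strength of the steering vector

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_activation(prompts, layer_idx):
    """Mean last-token activation at the output of block `layer_idx`."""
    acts = []
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so block i's output
        # is hidden_states[i + 1].
        acts.append(out.hidden_states[layer_idx + 1][0, -1])
    return torch.stack(acts).mean(dim=0)

# Toy contrastive pairs: tactics we want promoted vs. demoted.
pos_prompts = ["exact h", "apply Nat.succ_le_succ"]
neg_prompts = ["sorry", "admit"]
steering_vec = (mean_activation(pos_prompts, LAYER_IDX)
                - mean_activation(neg_prompts, LAYER_IDX))

def steer_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the scaled steering vector to every sequence position.
    hidden = output[0] + STEERING_SCALE * steering_vec
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER_IDX].register_forward_hook(steer_hook)
ids = tokenizer("theorem add_comm (a b : Nat) : a + b = b + a := by", return_tensors="pt")
with torch.no_grad():
    steered = model.generate(**ids, max_new_tokens=32,
                             pad_token_id=tokenizer.eos_token_id)
handle.remove()  # restore unsteered behavior
print(tokenizer.decode(steered[0]))
```

Because the intervention is a single vector addition at one layer, it requires no gradient updates or fine-tuning, which is what makes the approach attractive in the resource-constrained settings the abstract mentions.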