OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step
Owen Dugan, Donato Manuel Jimenez Beneto, Charlotte Loh, Zhuo Chen, Rumen Dangovski, Marin Soljačić
To achieve accurate calculations, language model systems often enable LLMs to generate code for arithmetic operations. However, this approach compromises speed and security, and, if fine-tuning is involved, risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of an LLM to control a symbolic architecture that performs arithmetic. Our implementation using Llama 3 8B Instruct with OccamNet as the symbolic model (OccamLlama) achieves 100% accuracy on single arithmetic operations (+, −, ×, ÷, sin, cos, log, exp, √), outperforming GPT-4o and on par with GPT-4o using a code interpreter. OccamLlama also outperforms GPT-4o, both with and without a code interpreter, on mathematical problem-solving benchmarks involving challenging arithmetic, enabling small LLMs to match the arithmetic performance of much larger models. We will make our code public shortly.
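To make the core idea concrete, here is a minimal, hypothetical sketch, not the authors' implementation: a small head reads an LLM hidden state and selects one primitive operation, which is then executed symbolically, so the numeric result is exact rather than generated token by token. The class name, the operation set, and the hard argmax selection are assumptions for illustration; only the hidden size (4096 for Llama 3 8B) comes from the model family.

```python
import torch
import torch.nn as nn

class ArithmeticHead(nn.Module):
    """Hypothetical sketch: a linear probe over an LLM hidden state picks
    one primitive operation, which is then applied exactly to the operands."""
    OPS = [
        lambda a, b: a + b,
        lambda a, b: a - b,
        lambda a, b: a * b,
        lambda a, b: a / b,
    ]

    def __init__(self, hidden_dim: int):
        super().__init__()
        # One logit per primitive operation.
        self.selector = nn.Linear(hidden_dim, len(self.OPS))

    def forward(self, hidden_state: torch.Tensor, a: float, b: float) -> float:
        # Hard selection at inference time: take the most probable primitive.
        op_index = int(self.selector(hidden_state).argmax(dim=-1))
        # The arithmetic itself runs outside the decoder, so it is exact.
        return self.OPS[op_index](a, b)

head = ArithmeticHead(hidden_dim=4096)  # hidden size of Llama 3 8B
h = torch.randn(4096)                   # stand-in for a real hidden state
print(head(h, 6.0, 7.0))                # exact result of whichever op is chosen
```

Because the selected operation is evaluated in ordinary machine arithmetic rather than decoded digit by digit, accuracy does not degrade with operand length, which is the property the abstract highlights.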
arXiv.org Artificial Intelligence
Jun-29-2024