In-Context Algorithm Emulation in Fixed-Weight Transformers
Hu, Jerry Yao-Chieh, Liu, Hude, Zhang, Jennifer Yuntong, Liu, Han
We prove that a minimal Transformer architecture with frozen weights can emulate a broad class of algorithms via in-context prompting. In particular, for any algorithm implementable by a fixed-weight attention head (e.g., one-step gradient descent or linear/ridge regression), there exists a prompt that drives a two-layer softmax attention module to reproduce the algorithm's output to arbitrary precision. This guarantee extends even to a single-head attention layer (using longer prompts if necessary), achieving architectural minimality. Our key idea is to construct prompts that encode an algorithm's parameters into token representations, creating sharp dot-product gaps that force the softmax attention to follow the intended computation. This construction requires no feed-forward layers and no parameter updates: all adaptation happens through the prompt alone. These findings forge a direct link between in-context learning and algorithmic emulation, and offer a simple mechanism by which large Transformers can serve as prompt-programmable libraries of algorithms. They illuminate how GPT-style foundation models may swap algorithms via prompts alone, establishing a form of algorithmic universality in modern Transformer models.
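The sharp dot-product-gap mechanism can be illustrated with a minimal sketch (not the paper's actual construction): a softmax attention head with frozen, identity projections, where the prompt tokens carry the keys and values. Scaling the query widens the logit gap between the intended key and all others, so the softmax output approaches the intended value with arbitrary precision. The specific keys, values, and the scale `beta` below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    e = np.exp(x - x.max())
    return e / e.sum()

def frozen_head(query, keys, values):
    # A fixed-weight softmax attention head with identity Q/K/V maps:
    # all adaptation comes from the prompt (keys, values) and the query.
    return softmax(keys @ query) @ values

# Prompt: three tokens, each storing a different candidate output vector.
keys = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [-1.0, 0.0]])
values = np.array([[3.0, 1.0],
                   [0.5, -2.0],
                   [7.0, 7.0]])

target = values[1]  # the output the prompt is designed to select
errors = []
for beta in (1.0, 10.0, 100.0):
    # Larger beta -> larger dot-product gap between the intended key
    # and the rest -> softmax approaches a hard (one-hot) selection.
    out = frozen_head(beta * np.array([0.0, 1.0]), keys, values)
    errors.append(np.linalg.norm(out - target))
```

As `beta` grows, the emulation error shrinks toward zero, mirroring the "arbitrary precision" claim: the frozen head is steered entirely by the prompt, never by weight updates.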
Aug-26-2025