Transformers as Transducers
Lena Strobl, Dana Angluin, David Chiang, Jonathan Rawski, Ashish Sabharwal
arXiv.org Artificial Intelligence
We study the sequence-to-sequence mapping capacity of transformers by relating them to finite transducers, and find that they can express surprisingly large classes of transductions. We do so using variants of RASP, a programming language designed to help people "think like transformers," as an intermediate representation. We extend the existing Boolean variant B-RASP to sequence-to-sequence functions and show that it computes exactly the first-order rational functions (such as string rotation). Then, we introduce two new extensions. B-RASP[pos] enables calculations on positions (such as copying the first half of a string) and contains all first-order regular functions. S-RASP adds prefix sum, which enables additional arithmetic operations (such as squaring a string) and contains all first-order polyregular functions. Finally, we show that masked average-hard attention transformers can simulate S-RASP. A corollary of our results is a new proof that transformer decoders are Turing-complete.
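As a rough illustration of the three example transductions named in the abstract, here are plain-Python sketches. These are not RASP programs and do not reflect the paper's constructions; the exact conventions (e.g., rotating left by one, and "squaring" as repeating the string once per input position) are assumptions made for illustration.

```python
def rotate(s: str) -> str:
    """String rotation (a first-order rational function):
    move the first symbol to the end, e.g. 'abcd' -> 'bcda'.
    (Rotating left by one is an assumed convention.)"""
    return s[1:] + s[:1] if s else s

def copy_first_half(s: str) -> str:
    """Copy the first half of the string (the kind of positional
    calculation B-RASP[pos] enables), e.g. 'abcd' -> 'ab'."""
    return s[: len(s) // 2]

def square(s: str) -> str:
    """'Squaring' a string (a polyregular function whose output
    length grows quadratically): one copy of s per input position,
    e.g. 'ab' -> 'abab'. (This reading of 'squaring' is an assumption.)"""
    return s * len(s)
```

Note how the output length distinguishes the classes: rotation preserves length, copying the first half is bounded by it, and squaring grows quadratically, which is what the prefix-sum operation of S-RASP makes expressible.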
Apr-2-2024