The Role of Logic and Automata in Understanding Transformers

Lin, Anthony W., Barcelo, Pablo

arXiv.org Artificial Intelligence

The advent of transformers has in recent years led to powerful and revolutionary Large Language Models (LLMs). Despite this, our understanding of the capabilities of transformers is still meager. In this invited contribution, we recount the rapid progress made in the last few years on the question of what transformers can do. In particular, we will see the integral role of logic and automata (with some help from circuit complexity) in answering this question. We also mention several open problems at the intersection of logic, automata, verification, and transformers.


The Power of Hard Attention Transformers on Data Sequences: A formal language theoretic perspective

Neural Information Processing Systems

Formal language theory has recently been successfully employed to unravel the power of transformer encoders. This setting is primarily applicable in Natural Language Processing (NLP), as a token embedding function (where only a bounded number of tokens is admitted) is first applied before feeding the input to the transformer. In this paper, we initiate the study of the expressive power of transformer encoders on sequences of data, i.e., sequences of numbers rather than tokens drawn from a fixed finite alphabet. Our results indicate an increase in the expressive power of hard attention transformers over data sequences, in stark contrast to the case of strings. In particular, we prove that Unique Hard Attention Transformers (UHAT) over data sequences no longer lie within the circuit complexity class AC0 (even without positional encodings), unlike the case of string inputs, but they do remain within the complexity class TC0 (even with positional encodings).
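
To make the term concrete: in unique hard attention, each position attends to exactly one position, the one with the maximal attention score, rather than taking a softmax-weighted average over all positions. Below is a minimal NumPy sketch, assuming dot-product scores and leftmost tie-breaking (a common convention in this literature); the function name and shapes are illustrative, not taken from the paper:

    import numpy as np

    def unique_hard_attention(queries, keys, values):
        # queries, keys, values: arrays of shape (n, d) for a length-n sequence.
        scores = queries @ keys.T            # (n, n) pairwise attention scores
        winners = np.argmax(scores, axis=1)  # one winning position per query;
                                             # np.argmax breaks ties leftmost
        return values[winners]               # each output copies exactly one value row

Because each output row is an exact copy of a single value row, with no averaging, hard attention lends itself to the automata- and circuit-theoretic analyses that these papers carry out.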


Conceptual memory and inference

Rieger, C.

Classics

The program has two modes: PARAPHRASE and INFERENCE. In PARAPHRASE mode, up to 150 semantic paraphrases can be generated from an input sentence by reading the conceptual representation underlying that sentence, using different words and concept combinations.