Token Turing Machines

Michael S. Ryoo, Keerthana Gopalakrishnan, Kumara Kahatapitiya, Ted Xiao, Kanishka Rao, Austin Stone, Yao Lu, Julian Ibarz, Anurag Arnab

arXiv.org Artificial Intelligence 

Models for handling longer sequence lengths themselves are often not sufficient, since we do not want to run our entire transformer model for each time step when a new observation (e.g., a new frame) is provided. This necessitates developing models with explicit memories, enabling a model to fuse relevant past history with the current observation to make a prediction at the current time step. Another desideratum for such models, to scale to long sequence lengths, is that the computational cost at each time step should be constant, regardless of the length of the previous history. In this paper, we propose Token Turing Machines (TTMs), a sequential, auto-regressive model with external memory and constant computational time complexity at each step.

Our model is inspired by the seminal Neural Turing Machine, and has an external memory consisting of a set of tokens which summarise the previous history (i.e., frames). This memory is efficiently addressed, read and written using a Transformer as the processing unit/controller at each step. The model's memory module ensures that a new observation will only be processed with the contents of the memory (and not the entire history), meaning that it can efficiently process long sequences with a bounded computational cost at each step. We show that TTM outperforms other alternatives, such as other Transformer models designed for long sequences and recurrent neural networks, on two real-world sequential visual understanding tasks.
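The abstract describes a read–process–write loop over a fixed-size token memory. The sketch below illustrates that loop in PyTorch under simple assumptions: the `TokenSummarizer`, the token counts, and the small Transformer "processing unit" are illustrative placeholders, not the paper's exact architecture or hyperparameters.

```python
# Minimal sketch of a Token Turing Machine step (illustrative, not the
# authors' implementation). Assumes PyTorch; all sizes are arbitrary.
import torch
import torch.nn as nn


class TokenSummarizer(nn.Module):
    """Compress an arbitrary set of tokens into k tokens via learned
    importance weights (a TokenLearner-style weighted pooling)."""

    def __init__(self, k: int, dim: int):
        super().__init__()
        self.weights = nn.Linear(dim, k)  # one importance map per output token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) -> (batch, k, dim)
        attn = self.weights(tokens).transpose(1, 2).softmax(dim=-1)  # (B, k, N)
        return attn @ tokens


class TTMStep(nn.Module):
    """One read -> process -> write step. The controller only ever sees the
    memory plus the current observation, so per-step cost is bounded."""

    def __init__(self, dim: int = 256, mem_tokens: int = 96, read_tokens: int = 16):
        super().__init__()
        self.read = TokenSummarizer(read_tokens, dim)
        self.write = TokenSummarizer(mem_tokens, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.process = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, memory: torch.Tensor, obs_tokens: torch.Tensor):
        # Read: summarize memory + current observation into a small token set.
        read = self.read(torch.cat([memory, obs_tokens], dim=1))
        # Process: run the Transformer controller on the read tokens only.
        out = self.process(read)
        # Write: compress memory, controller output, and observation back
        # into a fixed-size memory for the next step.
        new_memory = self.write(torch.cat([memory, out, obs_tokens], dim=1))
        return out, new_memory


# Usage: unroll over a stream of per-frame visual tokens.
model = TTMStep()
memory = torch.zeros(1, 96, 256)           # initial external memory
for _ in range(10):                        # 10 "frames"
    frame_tokens = torch.randn(1, 32, 256)
    out, memory = model(memory, frame_tokens)
print(out.shape, memory.shape)             # (1, 16, 256), (1, 96, 256)
```

Because the memory and the number of read tokens are fixed, the cost of each step is independent of how many frames have already been processed, which is the constant per-step complexity the abstract highlights.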
