Attention-based Neural Cellular Automata

Neural Information Processing Systems 

Recent extensions of Cellular Automata (CA) have incorporated key ideas from modern deep learning, dramatically extending their capabilities and catalyzing a new family of Neural Cellular Automata (NCA) techniques. Inspired by Transformer-based architectures, our work presents a new class of attention-based NCAs formed using a spatially localized, yet globally organized, self-attention scheme. We introduce an instance of this class named Vision Transformer Cellular Automata (ViTCA).
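To make the core idea concrete, the following is an illustrative sketch, not the authors' implementation: each cell's state is updated by attending only to its local neighborhood (spatial locality), while a single shared set of query/key/value projections organizes the dynamics globally. All names, the neighborhood radius, and the residual update rule here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def localized_self_attention_step(grid, Wq, Wk, Wv, radius=1):
    """One hypothetical NCA update: each cell attends over its
    (2*radius + 1)**2 neighborhood on a toroidal grid.

    grid: (H, W, C) cell states; Wq, Wk, Wv: (C, C) projections.
    The same weights are shared by every cell (global organization),
    while attention is restricted to a local window (locality).
    """
    H, W, C = grid.shape
    pad = np.pad(grid, ((radius, radius), (radius, radius), (0, 0)), mode="wrap")
    out = np.empty_like(grid)
    win = 2 * radius + 1
    for i in range(H):
        for j in range(W):
            # Neighborhood providing keys/values for cell (i, j).
            nb = pad[i:i + win, j:j + win].reshape(-1, C)
            q = grid[i, j] @ Wq                     # (C,)
            k = nb @ Wk                             # (N, C)
            v = nb @ Wv                             # (N, C)
            attn = softmax(k @ q / np.sqrt(C))      # (N,)
            # Residual update, as in typical NCA rules.
            out[i, j] = grid[i, j] + attn @ v
    return out

rng = np.random.default_rng(0)
grid = rng.normal(size=(8, 8, 4))
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(4, 4)) for _ in range(3))
new_grid = localized_self_attention_step(grid, Wq, Wk, Wv)
print(new_grid.shape)  # (8, 8, 4)
```

Iterating such a step lets purely local attention propagate information across the whole grid over time, which is the sense in which the scheme is globally organized despite its local receptive field.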