Memory Mosaics

Jianyu Zhang, Niklas Nolte, Ranajoy Sadhukhan, Beidi Chen, Léon Bottou

arXiv.org Artificial Intelligence 

This paper presents a learning system architecture, Memory Mosaics, in which multiple associative memories work in concert to carry out a prediction task of interest. Such systems are closely related to memory networks [Weston et al., 2014, Sukhbaatar et al., 2015] and resemble transformers [Vaswani et al., 2017] despite significant differences. Like transformers, Memory Mosaics possess some of the disentanglement and compositional capabilities that have long eluded machine learning systems [Lake and Baroni, 2018]. Unlike transformers, whose internal mechanisms are hard to decipher [Olsson et al., 2022, Bietti et al., 2024], Memory Mosaics achieve these capabilities in comparatively transparent ways. The three main contributions of this work are (a) recognizing and exploiting a similarity between smoothing associative memories and self-attention, (b) identifying and illustrating the predictive disentanglement principle, which explains how training decomposes the overall task in interesting ways, and (c) showing that this comparatively transparent architecture matches the performance of decoding transformers on a language modeling task. Section 2 describes the basic architecture and outlines its consequences. Section 3 illustrates the predictive disentanglement principle. Section 4 extends these ideas to fully formed Memory Mosaics.
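
The similarity exploited in contribution (a) can be made concrete with a short sketch: retrieving a value from a Gaussian-kernel smoothing associative memory (a Nadaraya-Watson estimator over stored key-value pairs) has the same functional form as softmax dot-product attention once keys and queries are normalized. The Python sketch below illustrates only this correspondence; the function names, the bandwidth parameter beta, and the toy data are assumptions for illustration, not code from the paper.

    import numpy as np

    def kernel_smoothing_memory(keys, values, query, beta=1.0):
        # Nadaraya-Watson estimator over stored (key, value) pairs.
        # With unit-norm keys and query, the Gaussian kernel
        # exp(-beta/2 * ||k - q||^2) is proportional to exp(beta * <k, q>),
        # so retrieval reduces to a softmax over dot products.
        scores = beta * keys @ query             # kernel log-weights, shape (n,)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        return weights @ values                  # kernel-weighted average of stored values

    def attention_single_query(K, V, q, scale):
        # Standard scaled dot-product attention for a single query vector.
        scores = scale * K @ q
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V

    rng = np.random.default_rng(0)
    K = rng.normal(size=(8, 4)); K /= np.linalg.norm(K, axis=1, keepdims=True)
    V = rng.normal(size=(8, 4))
    q = rng.normal(size=4);      q /= np.linalg.norm(q)

    # With scale == beta, both routines compute the same quantity.
    assert np.allclose(kernel_smoothing_memory(K, V, q, beta=2.0),
                       attention_single_query(K, V, q, scale=2.0))
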
