BayesPCN: A Continually Learnable Predictive Coding Associative Memory

Neural Information Processing Systems

Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall (read) over memory learning (write). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations (forget) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand "timesteps" ago without a large drop in recall ability compared to the state-of-the-art offline-learned parametric memory models.
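The abstract names three memory operations: write (one-shot storage), read (recall from a possibly corrupted query), and forget (gradual decay to free capacity). As a minimal toy sketch of this interface, here is a classical Hopfield-style Hebbian store, not BayesPCN's hierarchical predictive-coding model; the class name and the `decay` parameter are illustrative assumptions.

```python
import numpy as np

class HebbianMemory:
    """Toy continual associative memory exposing write/read/forget.
    A Hebbian outer-product store, NOT BayesPCN itself."""

    def __init__(self, dim, decay=0.99):
        self.W = np.zeros((dim, dim))
        self.decay = decay  # forgetting factor (illustrative assumption)

    def write(self, x):
        # one-shot write: add the outer product of the new +/-1 pattern
        self.W += np.outer(x, x)

    def read(self, query, steps=3):
        # iterative recall: threshold the weighted sum at each step
        xi = np.sign(query)
        for _ in range(steps):
            xi = np.sign(self.W @ xi)
        return xi

    def forget(self):
        # gradually decay stored associations to free memory
        self.W *= self.decay
```

With two orthogonal ±1 patterns stored, a query with one flipped bit is cleaned up by `read`, and repeated `forget` calls shrink the stored associations toward zero.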



A BayesPCN's auto-associative and hetero-associative read … BayesPCN's top layer parameter update given x … We show that the modern Hopfield network's recall is equivalent to the recall of a BayesPCN model with … Recall in both BayesPCN and MHN is gradient ascent on the above log density. The universal Hopfield network [Millidge et al., 2022] proposes a framework for single-shot associative memories … Online GPCNs took a single gradient step w.r.t. the network … BayesPCN models had four hidden layers of width 256, a single particle, and GELU activations. MHNs again used β = 10,000. Figure 5 qualitatively demonstrates how BayesPCN's read scales with the number of stored datapoints. Figure 6 and Table 4 illustrate how BayesPCN's recall accuracy and MSE scale with the network width. We found that the increased network width was helpful across all tasks. On visual inspection, we found that the model's auto-associative recall outputs for both observed and unobserved inputs became less blurry as more datapoints were written into memory. Figure 7 illustrates BayesPCN's read outputs for unseen image queries after different numbers of writes. As BayesPCN observes more data, it learns to "generalize" … We expected this behaviour to occur since S-NCN [Ororbia et al., 2019], a model … Both GPCN and BayesPCN are, at their core, as much generative models as they are associative memories.
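The excerpt states that recall in both BayesPCN and the modern Hopfield network (MHN) is gradient ascent on a log density. For the MHN side, the fixed-point iteration of that ascent is the well-known softmax update ξ ← Mᵀ softmax(β M ξ) over stored patterns M. Below is a minimal sketch of that MHN-style update (not the authors' BayesPCN implementation); the function name and the choice of storing patterns as rows of `memories` are assumptions.

```python
import numpy as np

def mhn_recall(memories, query, beta=1.0, steps=5):
    """Modern-Hopfield-style recall: iterate xi <- M^T softmax(beta * M @ xi).

    memories: (N, D) array, one stored pattern per row.
    query:    (D,) possibly corrupted query vector.
    beta:     inverse temperature; large beta sharpens retrieval.
    """
    xi = query.astype(float).copy()
    for _ in range(steps):
        logits = beta * memories @ xi          # similarity to each stored pattern
        weights = np.exp(logits - logits.max())  # stable softmax
        weights /= weights.sum()
        xi = memories.T @ weights              # convex combination of memories
    return xi
```

With a large β (the excerpt mentions β = 10,000 for the MHN baselines), a query close to one stored pattern converges to that pattern in a few steps.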







BayesPCN: A Continually Learnable Predictive Coding Associative Memory

Yoo, Jason; Wood, Frank

arXiv.org Artificial Intelligence

Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall (read) over memory learning (write). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations (forget) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand "timesteps" ago without a large drop in recall ability compared to the state-of-the-art offline-learned parametric memory models.