This section reports the recall performance of MHN and BayesPCN models on high-query-noise associative recall tasks. Table 5 describes the CIFAR10 recall results of nine structurally identical BayesPCN models with four hidden layers of size 1024, a single particle, and GELU activations, but with different values of σ_W and σ_x. On visual inspection, we found that the model's auto-associative recall outputs for both observed and unobserved inputs became less blurry as more datapoints were written into memory. At their core, both GPCN and BayesPCN are as much generative models as they are associative memories.
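As a rough point of reference for the architecture described above, the following is a minimal sketch, assuming PyTorch, of a network with four hidden layers of width 1024 and GELU activations on flattened CIFAR10 inputs. The class name and the treatment of σ_W and σ_x as plain constructor arguments are illustrative assumptions; none of BayesPCN's actual read/write machinery is reproduced here.

```python
# Minimal sketch (hypothetical names) of the network shape described above:
# four hidden layers of width 1024 with GELU activations. The Bayesian
# write/read machinery of BayesPCN is intentionally omitted.
import torch
import torch.nn as nn

class RecallNet(nn.Module):
    def __init__(self, input_dim: int = 3 * 32 * 32,  # flattened CIFAR10 image
                 hidden_dim: int = 1024, n_hidden: int = 4,
                 sigma_w: float = 1.0, sigma_x: float = 0.01):
        super().__init__()
        self.sigma_w = sigma_w  # assumed meaning: prior scale on the weights
        self.sigma_x = sigma_x  # assumed meaning: observation noise scale
        dims = [input_dim] + [hidden_dim] * n_hidden
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.GELU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(start_dim=1))
```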
Robust Graph Representation Learning via Predictive Coding
Billy Byiringiro, Tommaso Salvatori, Thomas Lukasiewicz
Predictive coding is a message-passing framework initially developed to model information processing in the brain, and now also a topic of research in machine learning due to some interesting properties. One such property is the natural ability of generative models to learn robust representations, thanks to a peculiar credit assignment rule that allows neural activities to converge to a solution before the synaptic weights are updated. Graph neural networks are also message-passing models, and have recently shown outstanding results in diverse machine learning tasks, providing interdisciplinary state-of-the-art performance on structured data. However, they are vulnerable to imperceptible adversarial attacks and unfit for out-of-distribution generalization. In this work, we address this by building models that have the same structure as popular graph neural network architectures but rely on the message-passing rule of predictive coding. Through an extensive set of experiments, we show that the proposed models are (i) comparable to standard ones in terms of performance in both inductive and transductive tasks, (ii) better calibrated, and (iii) robust against multiple kinds of adversarial attacks.
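To make the credit assignment rule above concrete, here is a minimal sketch, assuming a generic two-layer predictive coding network rather than the paper's graph formulation: neural activities are first relaxed to minimize the prediction-error energy while the weights stay frozen, and the weights are updated only once afterwards. All layer sizes, step sizes, and names are illustrative assumptions.

```python
import torch

def pc_learn(x_in, x_out, W1, W2, n_relax=20, lr_x=0.1, lr_w=1e-3):
    """One predictive coding learning step on a single (input, target) batch.

    Illustrative sketch: activities relax to minimize the prediction-error
    energy with weights frozen; the weights are updated once afterwards.
    """
    W1.requires_grad_(True)
    W2.requires_grad_(True)
    # Initialize the hidden activity with a feedforward pass.
    h = torch.tanh(x_in @ W1).detach().requires_grad_(True)

    def energy(h):
        e1 = h - torch.tanh(x_in @ W1)   # error between activity and prediction
        e2 = x_out - torch.tanh(h @ W2)  # prediction error at the output layer
        return 0.5 * (e1.pow(2).sum() + e2.pow(2).sum())

    # Phase 1: relax the neural activities toward an energy minimum.
    for _ in range(n_relax):
        (g_h,) = torch.autograd.grad(energy(h), h)
        h = (h - lr_x * g_h).detach().requires_grad_(True)

    # Phase 2: a single weight update at the converged activities.
    g1, g2 = torch.autograd.grad(energy(h), (W1, W2))
    with torch.no_grad():
        W1 -= lr_w * g1
        W2 -= lr_w * g2
```

The split into a relaxation phase followed by a single weight update is what distinguishes this rule from ordinary backpropagation, where weight gradients are taken against the feedforward activities directly.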
BayesPCN: A Continually Learnable Predictive Coding Associative Memory
Associative memory plays an important role in human intelligence, and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall ($read$) over memory learning ($write$). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations ($forget$) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand "timesteps" ago without a large drop in recall ability compared to state-of-the-art offline-learned parametric memory models.
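As a way to picture the $write$/$read$/$forget$ interface the abstract describes, here is a deliberately simplified sketch. The Hebbian outer-product storage rule below is a stand-in chosen for brevity, not BayesPCN's hierarchical Bayesian update, and all names are hypothetical.

```python
import torch

class AssociativeMemory:
    """Toy associative memory illustrating a write/read/forget interface."""

    def __init__(self, dim: int):
        self.M = torch.zeros(dim, dim)  # synaptic weight matrix

    def write(self, x: torch.Tensor) -> None:
        """One-shot write: store pattern x with a Hebbian outer product."""
        x = x / x.norm()
        self.M += torch.outer(x, x)

    def read(self, x_noisy: torch.Tensor, n_steps: int = 5) -> torch.Tensor:
        """Iteratively denoise a corrupted query toward a stored pattern."""
        x = x_noisy.clone()
        for _ in range(n_steps):
            x = torch.tanh(self.M @ x)
        return x

    def forget(self, decay: float = 0.99) -> None:
        """Gradually downweight past writes to free capacity."""
        self.M *= decay
```

A query then looks like `mem.read(noisy_x)` after one or more `mem.write(x)` calls, while `mem.forget()` trades old traces for capacity, mirroring the three operations named in the abstract.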