ISAM-MTL: Cross-subject multi-task learning model with identifiable spikes and associative memory networks

Li, Junyan, Hu, Bin, Guan, Zhi-Hong

arXiv.org Artificial Intelligence

Cross-subject variability in EEG degrades the performance of current deep learning models, limiting the development of brain-computer interfaces (BCIs). This paper proposes ISAM-MTL, a multi-task learning (MTL) EEG classification model based on identifiable spiking (IS) representations and associative memory (AM) networks. The model treats EEG classification for each subject as an independent task and leverages cross-subject data during training to facilitate feature sharing across subjects. ISAM-MTL consists of a spiking feature extractor that captures features shared across subjects and a subject-specific bidirectional associative memory network, trained with Hebbian learning, for fast and efficient within-subject EEG classification. ISAM-MTL thus integrates learned spiking neural representations with bidirectional associative memory for cross-subject EEG classification. The model employs label-guided variational inference to construct identifiable spike representations, improving classification accuracy. Experimental results on two BCI Competition datasets demonstrate that ISAM-MTL improves the average accuracy of cross-subject EEG classification while reducing performance variability across subjects. The model further exhibits few-shot learning and identifiable neural activity underlying the EEG, enabling rapid and interpretable calibration of BCI systems.
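
To make the classification stage described above concrete, here is a minimal, hypothetical sketch of a Hebbian bidirectional associative memory (BAM) acting as the per-subject classifier. The shared spiking feature extractor is not reproduced; the random arrays merely stand in for its output, and the dimensions are illustrative.

import numpy as np

def train_bam(features, labels):
    """Hebbian outer-product learning: W = sum_i y_i x_i^T over bipolar codes."""
    X = np.sign(features * 2 - 1)            # map 0/1 spiking features to -1/+1
    Y = np.sign(labels * 2 - 1)              # map one-hot labels to -1/+1
    return Y.T @ X                           # shape: (n_classes, n_features)

def recall(W, x, steps=5):
    """Bidirectional retrieval: alternate between feature and label layers."""
    x = np.sign(x * 2 - 1)
    y = np.sign(W @ x)                       # feature layer -> label layer
    for _ in range(steps):
        x = np.sign(W.T @ y)                 # label layer -> feature layer
        y = np.sign(W @ x)                   # feature layer -> label layer
    return int(np.argmax(y))                 # index of the winning class unit

# Toy usage with random arrays standing in for extracted spike features.
rng = np.random.default_rng(0)
features = (rng.random((20, 64)) > 0.5).astype(float)   # 20 trials, 64 binary features
labels = np.eye(4)[rng.integers(0, 4, 20)]               # 4 classes, one-hot labels
W = train_bam(features, labels)
print(recall(W, features[0]))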


Review for NeurIPS paper: PyGlove: Symbolic Programming for Automated Machine Learning

Neural Information Processing Systems

Summary and Contributions: The paper introduces an AutoML library that tries to find its own sweet spot in the large ecosystem of newly minted AutoML libraries. It introduces a symbolic frontend for building neural network models, with simple fundamental constructs that allow choices to be inserted. Unlike all the other packages I have seen and reviewed, such as Keras Tuner, NNI, AutoGluon, and Optuna (by the way, the reference to Optuna is missing; you should consider adding it), this paper introduces something innovative and elegant. All of those packages consistently suffer from the model-definition code getting ugly and unwieldy very quickly once you introduce model structure searches, especially when structure searches interact with size searches. In this paper, the authors cleanly separate model structure definitions from each layer's hyperparameter choices.
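
To illustrate the separation the review praises, here is a rough, hypothetical sketch in the spirit of the paper's examples: the model structure is written once, and tunable decisions are dropped in as symbolic choice objects. pg.oneof and pg.symbolize appear in the paper; the make_model helper and backbone names are stand-ins of mine, and the actual search/sampling calls are omitted.

import pyglove as pg

@pg.symbolize
def make_model(backbone, width_multiplier):
    # Structure choice (which backbone) and size choice (how wide) stay
    # separate and readable, instead of being tangled into nested tuning code.
    return {'backbone': backbone, 'width': width_multiplier}

search_space = make_model(
    backbone=pg.oneof(['resnet18', 'resnet50', 'mobilenet_v2']),
    width_multiplier=pg.oneof([0.5, 0.75, 1.0]),
)
print(search_space)   # a symbolic program; a search algorithm rewrites the choices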


Review for NeurIPS paper: PyGlove: Symbolic Programming for Automated Machine Learning

Neural Information Processing Systems

The reviewers generally agree that the design choices of this framework for AutoML are judicious and hit a "sweet spot". This combination of language/tooling design is of great value to expose to large swathes of the NeurIPS community. The rebuttal persuasively addresses the reviewers' concerns about the evaluation and utility of this proposal, and the response to R4 is also reassuring. We look forward to the authors' final version of the paper, incorporating the proposed improvements.


Reviews: Dense Associative Memory for Pattern Recognition

Neural Information Processing Systems

The theoretical contribution presented in lines 291--310 is a welcome insight into the computational power of ReLUs. The experimental results for rectified polynomial units reported in Figures 2 and 3 are interesting and apparently novel, even in the context of standard feedforward multi-layer networks. Since lines 291--297 are a central point of the paper, they should be expanded and better justified. Furthermore, the simple capacity analysis developed on p. 3 for the polynomial energy function is invoked here for the rectified polynomial energy function; this also has to be justified. The paper starts from, and mostly focuses on, the associative memory (Hamiltonian) formulation, but the findings are then restricted to one-step retrieval.
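
For readers who want the one-step retrieval the review refers to spelled out, the following is my own minimal sketch (not the authors' code) of a dense associative memory with stored patterns xi, energy E(sigma) = -sum_mu F(xi_mu . sigma), and a rectified polynomial F(x) = max(x, 0)^n.

import numpy as np

def F(x, n=3):
    return np.maximum(x, 0.0) ** n

def energy(xi, sigma, n=3):
    return -F(xi @ sigma, n).sum()

def one_step_retrieval(xi, sigma, n=3):
    """Update each spin once to the value that lowers the energy."""
    sigma = sigma.copy()
    for i in range(sigma.size):
        plus, minus = sigma.copy(), sigma.copy()
        plus[i], minus[i] = 1.0, -1.0
        sigma[i] = 1.0 if energy(xi, plus, n) <= energy(xi, minus, n) else -1.0
    return sigma

rng = np.random.default_rng(0)
xi = rng.choice([-1.0, 1.0], size=(5, 100))                   # 5 stored +/-1 patterns
probe = xi[0] * rng.choice([1.0, -1.0], 100, p=[0.9, 0.1])    # ~10% corrupted cue
print((one_step_retrieval(xi, probe) == xi[0]).mean())        # fraction of bits recovered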


On the relationship between variational inference and auto-associative memory

Neural Information Processing Systems

In this article, we propose a variational inference formulation of auto-associative memories, allowing us to combine perceptual inference and memory retrieval within the same mathematical framework. In this formulation, the prior probability distribution over latent representations is made memory dependent, thus pulling the inference process towards previously stored representations. We then study how different neural network approaches to variational inference can be applied in this framework. We compare methods relying on amortized inference, such as Variational Autoencoders, with methods relying on iterative inference, such as Predictive Coding, and suggest combining both approaches to design new auto-associative memory models. We evaluate the resulting algorithms on the CIFAR10 and CLEVR image datasets and compare them with other associative memory models such as Hopfield Networks, End-to-End Memory Networks and Neural Turing Machines.
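
A minimal sketch of the central idea, under my own simplifying assumptions: the prior over latents is a mixture of Gaussians centered on the stored latent codes, so iterative (predictive-coding-style) inference on a cue is pulled toward previously stored representations. The decoder gradient is left as a user-supplied function; everything else here is illustrative.

import numpy as np

def memory_log_prior(z, memory, sigma=0.5):
    """log p(z | M) for a uniform mixture of Gaussians centered on stored codes."""
    d2 = ((memory - z) ** 2).sum(axis=1)          # squared distance to each stored code
    log_comps = -d2 / (2 * sigma ** 2)            # unnormalized Gaussian log-densities
    return np.logaddexp.reduce(log_comps) - np.log(len(memory))

def retrieve(z0, memory, decoder_grad, x, steps=50, lr=0.1, eps=1e-4):
    """Iterative inference: ascend log-likelihood plus the memory-dependent prior."""
    z = z0.copy()
    for _ in range(steps):
        # numerical gradient of the memory prior (kept simple for the sketch)
        g_prior = np.array([
            (memory_log_prior(z + eps * e, memory) -
             memory_log_prior(z - eps * e, memory)) / (2 * eps)
            for e in np.eye(len(z))
        ])
        z += lr * (decoder_grad(z, x) + g_prior)
    return z

# Toy usage: identity decoder, so the likelihood gradient is just (x - z).
memory = np.array([[2.0, 2.0], [-2.0, 2.0], [0.0, -2.0]])    # three stored codes
cue = np.array([1.6, 2.3])                                    # noisy version of the first
print(retrieve(cue.copy(), memory, lambda z, x: x - z, cue))  # settles near [2.0, 2.0]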


BayesPCN: A Continually Learnable Predictive Coding Associative Memory

Neural Information Processing Systems

Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall (read) over memory learning (write). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations (forget) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d.
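
As a generic illustration of the read/write/forget distinction drawn above (not BayesPCN itself), a plain linear associative memory already exposes all three operations: write adds an outer product, read iterates the stored mapping, and forget decays old traces.

import numpy as np

class LinearMemory:
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))

    def write(self, x):                      # one-shot storage of a single +/-1 pattern
        self.W += np.outer(x, x) / x.size

    def read(self, x_corrupt, steps=10):     # recall by iterating the stored mapping
        x = x_corrupt.copy()
        for _ in range(steps):
            x = np.sign(self.W @ x)
        return x

    def forget(self, rate=0.1):              # free capacity by shrinking old traces
        self.W *= 1.0 - rate

# Toy usage: store two patterns, recall one from a corrupted cue, then forget.
rng = np.random.default_rng(0)
mem = LinearMemory(100)
patterns = rng.choice([-1.0, 1.0], size=(2, 100))
for p in patterns:
    mem.write(p)
cue = patterns[0] * rng.choice([1.0, -1.0], 100, p=[0.9, 0.1])
print((mem.read(cue) == patterns[0]).mean())
mem.forget(0.5)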


Understanding Factual Recall in Transformers via Associative Memories

Nichani, Eshaan, Lee, Jason D., Bietti, Alberto

arXiv.org Machine Learning

Large language models have demonstrated an impressive ability to perform factual recall. Prior work has found that transformers trained on factual recall tasks can store information at a rate proportional to their parameter count. In our work, we show that shallow transformers can use a combination of associative memories to obtain such near-optimal storage capacity. We begin by proving that the storage capacities of both linear and MLP associative memories scale linearly with parameter count. We next introduce a synthetic factual recall task, and prove that a transformer with a single layer of self-attention followed by an MLP can obtain 100% accuracy on the task whenever either the total number of self-attention parameters or the number of MLP parameters scales (up to log factors) linearly with the number of facts. In particular, the transformer can trade off between using the value matrices or the MLP as an associative memory to store the dataset of facts. We complement these expressivity results with an analysis of the gradient flow trajectory of a simplified linear attention model trained on our factual recall task, where we show that the model exhibits sequential learning behavior.
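
The basic object analyzed in this abstract, an outer-product linear associative memory over random embeddings, is easy to sketch; the dimensions below are illustrative choices of mine rather than the paper's.

import numpy as np

rng = np.random.default_rng(0)
d, n_facts, n_values = 256, 200, 50

keys = rng.standard_normal((n_facts, d)) / np.sqrt(d)     # one key per fact (e.g. subject + relation)
value_ids = rng.integers(0, n_values, n_facts)            # which answer each fact maps to
values = rng.standard_normal((n_values, d)) / np.sqrt(d)  # answer token embeddings

W = values[value_ids].T @ keys                             # outer-product storage, d x d parameters

# Recall: W k_i should align best with the stored value embedding for fact i.
scores = values @ (W @ keys.T)                             # (n_values, n_facts)
recalled = scores.argmax(axis=0)
print("recall accuracy:", (recalled == value_ids).mean())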


Storing overlapping associative memories on latent manifolds in low-rank spiking networks

Podlaski, William F., Machens, Christian K.

arXiv.org Artificial Intelligence

Associative memory architectures such as the Hopfield network have long been important conceptual and theoretical models for neuroscience and artificial intelligence. However, translating these abstract models into spiking neural networks has been surprisingly difficult. Indeed, much previous work has been restricted to storing a small number of primarily non-overlapping memories in large networks, thereby limiting their scalability. Here, we revisit the associative memory problem in light of recent advances in understanding spike-based computation. Using a recently-established geometric framework, we show that the spiking activity for a large class of all-inhibitory networks is situated on a low-dimensional, convex, and piecewise-linear manifold, with dynamics that move along the manifold. We then map the associative memory problem onto these dynamics, and demonstrate how the vertices of a hypercubic manifold can be used to store stable, overlapping activity patterns with a direct correspondence to the original Hopfield model. We propose several learning rules, and demonstrate a linear scaling of the storage capacity with the number of neurons, as well as robust pattern completion abilities. Overall, this work serves as a case study to demonstrate the effectiveness of using a geometrical perspective to design dynamics on neural manifolds, with implications for neuroscience and machine learning.
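
For reference, the classical Hopfield construction to which this abstract establishes a correspondence can be sketched in a few lines: binary patterns sit at hypercube vertices, the Hebbian weight matrix makes them (approximately) stable, and corrupted cues are completed. The spiking, all-inhibitory implementation itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))   # P patterns at hypercube vertices

W = (patterns.T @ patterns) / N                    # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)

state = patterns[0].copy()
state[: N // 5] *= -1                              # corrupt 20% of the entries
for _ in range(20):                                # synchronous updates to a (near) fixed point
    state = np.sign(W @ state)
print("overlap with stored pattern:", (state == patterns[0]).mean())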


Firing Rate Models as Associative Memory: Excitatory-Inhibitory Balance for Robust Retrieval

Betteti, Simone, Baggio, Giacomo, Bullo, Francesco, Zampieri, Sandro

arXiv.org Artificial Intelligence

Firing rate models are dynamical systems widely used in applied and theoretical neuroscience to describe local cortical dynamics in neuronal populations. By providing a macroscopic perspective of neuronal activity, these models are essential for investigating oscillatory phenomena, chaotic behavior, and associative memory processes. Despite their widespread use, the application of firing rate models to associative memory networks has received limited mathematical exploration, and most existing studies are focused on specific models. Conversely, well-established associative memory designs, such as Hopfield networks, lack key biologically-relevant features intrinsic to firing rate models, including positivity and interpretable synaptic matrices that reflect excitatory and inhibitory interactions. To address this gap, we propose a general framework that ensures the emergence of re-scaled memory patterns as stable equilibria in the firing rate dynamics. Furthermore, we analyze the conditions under which the memories are locally and globally asymptotically stable, providing insights into constructing biologically-plausible and robust systems for associative memory retrieval.
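
As a toy illustration (not the paper's construction), one can build a firing rate model tau dx/dt = -x + relu(W x) in which nonnegative memory patterns are equilibria by choosing a projection-style W with W x_mu = x_mu. The paper's contribution is the general framework and the local/global stability conditions, which this sketch does not address.

import numpy as np

rng = np.random.default_rng(2)
N, P, tau, dt = 50, 3, 10.0, 0.5

X = rng.random((N, P))                       # columns: nonnegative memory patterns
W = X @ np.linalg.pinv(X)                    # projection rule, so W X = X

def relu(v):
    return np.maximum(v, 0.0)

x = X[:, 0] + 0.05 * rng.standard_normal(N)  # noisy version of the first memory
for _ in range(2000):                        # Euler integration of the rate dynamics
    x = x + (dt / tau) * (-x + relu(W @ x))

# Components of the perturbation outside the memory subspace decay away.
print("residual distance to stored pattern:", np.linalg.norm(x - X[:, 0]))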


Entropic Hetero-Associative Memory

Morales, Rafael, Pineda, Luis A.

arXiv.org Artificial Intelligence

The Entropic Associative Memory holds objects in a 2D relation or ``memory plane'' using a finite table as the medium. Memory objects are stored by simultaneously reinforcing the cells used by the cue, implementing a form of Hebb's learning rule. Stored objects are ``overlapped'' on the medium, hence the memory is indeterminate and has an entropy value at each state. The retrieval operation constructs an object from the cue and that indeterminate content. In this paper we present the extension to the hetero-associative case in which these properties are preserved. Pairs of hetero-associated objects, possibly of different domains and/or modalities, are held in a 4D relation. The memory retrieval operation selects a largely indeterminate 2D memory plane that is specific to the input cue; however, there is then no cue left with which to retrieve an object from that plane. We propose three incremental methods to address this missing-cue problem, which we call random, sample and test, and search and test. The model is assessed with composite recollections consisting of manuscript digits and letters selected from the MNIST and EMNIST corpora, respectively, such that cue digits retrieve their associated letters and vice versa. We show the memory performance and illustrate the memory retrieval operation using all three methods. The system shows promise for storing, recognizing and retrieving very large sets of objects with very limited computing resources.
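
A very small sketch of the table-based storage described above, restricted to the simpler auto-associative (2D) setting and using my own simplifications: an object is a row of quantized feature values, registering it reinforces one cell per column, and retrieval samples each column from the accumulated, indeterminate content near the cue. The 4D hetero-associative extension and the three cue-completion methods from the paper are not reproduced.

import numpy as np

n_features, n_levels = 8, 16
table = np.zeros((n_levels, n_features))          # the "memory plane"

def register(obj):
    table[obj, np.arange(n_features)] += 1.0      # Hebb-like reinforcement of the used cells

def retrieve(cue, rng, kappa=1.0):
    out = np.empty(n_features, dtype=int)
    for j in range(n_features):
        # weight stored levels by closeness to the cue value in this column
        w = table[:, j] * np.exp(-kappa * np.abs(np.arange(n_levels) - cue[j]))
        out[j] = rng.choice(n_levels, p=w / w.sum()) if w.sum() > 0 else cue[j]
    return out

rng = np.random.default_rng(3)
for _ in range(5):                                 # store five random objects
    register(rng.integers(0, n_levels, n_features))
print(retrieve(rng.integers(0, n_levels, n_features), rng))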