Hebbian plasticity
Meta-Learning through Hebbian Plasticity in Random Networks
Lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning (RL) approaches have shown significant progress in solving complex tasks; however, once training is concluded, the solutions found are typically static and incapable of adapting to new information or perturbations. While it is still not completely understood how biological brains learn and adapt so efficiently from experience, it is believed that synaptic plasticity plays a prominent role in this process. Inspired by this biological mechanism, we propose a search method that, instead of optimizing the weight parameters of neural networks directly, searches only for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent. We demonstrate our approach on several reinforcement learning tasks with different sensory modalities and more than 450K trainable plasticity parameters. We find that, starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamic 2D pixel environment; likewise, they allow a simulated 3D quadrupedal robot to learn how to walk while adapting to morphological damage not seen during training, in the absence of any explicit reward or error signal, in fewer than 100 timesteps.
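The key idea is that each synapse carries its own plasticity coefficients, and only those coefficients are searched for. A minimal sketch of one such local update, assuming a generalized ABCD-style Hebbian rule with per-synapse coefficients (the variable names, sizes, and initialization ranges here are illustrative, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

# Completely random initial weights: the rule, not the weights, is optimized.
W = rng.uniform(-0.1, 0.1, (n_out, n_in))

# Per-synapse plasticity coefficients (eta, A, B, C, D); in a large network
# these make up the hundreds of thousands of trainable plasticity parameters.
eta, A, B, C, D = (rng.uniform(-0.1, 0.1, (n_out, n_in)) for _ in range(5))

def hebbian_step(W, x):
    """One forward pass followed by a purely local Hebbian weight update."""
    y = np.tanh(W @ x)                  # post-synaptic activity, shape (n_out,)
    pre = x[None, :]                    # pre-synaptic activity, shape (1, n_in)
    post = y[:, None]                   # shape (n_out, 1), broadcasts over pre
    dW = eta * (A * pre * post + B * pre + C * post + D)
    return W + dW, y

x = rng.uniform(-1, 1, n_in)
W, y = hebbian_step(W, x)               # weights self-organize online
```

No reward or gradient enters `hebbian_step`; applying it at every timestep is what lets the weights reorganize during the agent's lifetime.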
f6876a9f998f6472cc26708e27444456-AuthorFeedback.pdf
We thank all reviewers for their thoughtful comments. "The method is only compared to prior models with long-term memory on the [QA] task, and doesn't perform as [...]": this is expected, as these are ML models with non-biological [...]; our goal was to show that simple local Hebbian plasticity can be utilized to solve many of these tasks. "Is it essential that the key-value [...]": our goal was to show that simple local plasticity is sufficient for many tasks. "How and why do the query and storage keys [...]", "[...] isn't it possible to achieve good performance on the tasks in the paper [...]": this approach is rather close to the approach of MemN2N. "[...] it would be helpful to explain the practical or physiological relevance in more detail."
Hebbian Memory-Augmented Recurrent Networks: Engram Neurons in Deep Learning
Despite success across diverse tasks, current artificial recurrent network architectures rely primarily on implicit hidden-state memories, limiting their interpretability and ability to model long-range dependencies. In contrast, biological neural systems employ explicit, associative memory traces (i.e., engrams) strengthened through Hebbian synaptic plasticity and activated sparsely during recall. Motivated by these neurobiological insights, we introduce the Engram Neural Network (ENN), a novel recurrent architecture incorporating an explicit, differentiable memory matrix with Hebbian plasticity and sparse, attention-driven retrieval mechanisms. The ENN explicitly models memory formation and recall through dynamic Hebbian traces, improving transparency and interpretability compared to conventional RNN variants. We evaluate the ENN architecture on three canonical benchmarks: MNIST digit classification, CIFAR-10 image sequence modeling, and WikiText-103 language modeling. Our empirical results demonstrate that the ENN achieves accuracy and generalization performance broadly comparable to classical RNN, GRU, and LSTM architectures, with all models converging to similar accuracy and perplexity on the large-scale WikiText-103 task. At the same time, the ENN offers significant enhancements in interpretability through observable memory dynamics. Hebbian trace visualizations further reveal biologically plausible, structured memory formation processes, validating the potential of neuroscience-inspired mechanisms to inform the development of more interpretable and robust deep learning models.
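A rough sketch of the two mechanisms the abstract names, an explicit memory matrix written via a Hebbian trace and read via sparse, attention-driven retrieval. The slot count, decay and learning-rate constants, and top-k retrieval below are assumptions for illustration, not the ENN's published equations:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

d, slots = 8, 16
rng = np.random.default_rng(1)
M = np.zeros((slots, d))               # explicit engram memory matrix
keys = rng.normal(size=(slots, d))     # addressing keys for the slots

def write(M, h, lam=0.9, eta=0.5):
    """Hebbian write: decay old traces, strengthen slots that attend to h."""
    a = softmax(keys @ h)              # attention over memory slots
    return lam * M + eta * np.outer(a, h)

def read(M, h, k=3):
    """Sparse recall: attend over slots, keep only the top-k activations."""
    a = softmax(keys @ h)
    mask = np.zeros_like(a)
    top = np.argsort(a)[-k:]
    mask[top] = a[top]
    mask = mask / mask.sum()
    return mask @ M                    # weighted readout from memory

h = rng.normal(size=d)                 # a hidden state to store and recall
M = write(M, h)
r = read(M, h)
```

Because `M` is an explicit matrix rather than a hidden state, its traces can be visualized directly, which is the interpretability benefit the abstract claims.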
Personalized Artificial General Intelligence (AGI) via Neuroscience-Inspired Continuous Learning Systems
Gupta, Rajeev, Gupta, Suhani, Parikh, Ronak, Gupta, Divya, Javaheri, Amir, Shaktawat, Jairaj Singh
Artificial Intelligence has made remarkable advancements in recent years, primarily driven by increasingly large deep learning models. However, achieving true Artificial General Intelligence (AGI) demands fundamentally new architectures rather than merely scaling up existing models. Current approaches largely depend on expanding model parameters, which improves task-specific performance but falls short in enabling continuous, adaptable, and generalized learning. Achieving AGI capable of continuous learning and personalization on resource-constrained edge devices is an even bigger challenge. This paper reviews the state of continual learning and neuroscience-inspired AI, and proposes a novel architecture for Personalized AGI that integrates brain-like learning mechanisms for edge deployment. We review literature on continuous lifelong learning, catastrophic forgetting, and edge AI, and discuss key neuroscience principles of human learning, including Synaptic Pruning, Hebbian plasticity, Sparse Coding, and Dual Memory Systems, as inspirations for AI systems. Building on these insights, we outline an AI architecture that features complementary fast-and-slow learning modules, synaptic self-optimization, and memory-efficient model updates to support on-device lifelong adaptation. Conceptual diagrams of the proposed architecture and learning processes are provided. We address challenges such as catastrophic forgetting, memory efficiency, and system scalability, and present application scenarios for mobile AI assistants and embodied AI systems like humanoid robots. We conclude with key takeaways and future research directions toward truly continual, personalized AGI on the edge. While the architecture is theoretical, it synthesizes diverse findings and offers a roadmap for future implementation.
Review for NeurIPS paper: Meta-Learning through Hebbian Plasticity in Random Networks
Weaknesses: The fact that every neuron's plasticity parameter can be learned makes it difficult to interpret what is being learned. Are the weights effectively learning to relax to the same steady state from random initial conditions (in which case the plasticity rules are essentially encoding weights)? The illustrated weight attractors do not provide much insight. The results for average distance traveled are not particularly convincing because the static weight networks outperform the Hebbian weights so drastically for two out of three situations. The requirement that networks evolve from random initial weights may be limiting performance, and from a biological standpoint completely random weights without any structure is probably not the appropriate starting point (evolution and development may optimize this initial condition).
Review for NeurIPS paper: Meta-Learning through Hebbian Plasticity in Random Networks
The paper proposes the use of evolution to find the parameters of a parametrized Hebbian plasticity learning rule, optimizing the network to adapt from random weights to perform tasks rather than learning the weights themselves. The paper is well written, and the idea is interesting with motivations from neuroscience. Reviewers generally found the work encouraging and suggested improvements and future work, such as examining generalization abilities. R3 also made a good point that while the work is motivated from neuroscience, it is "difficult to relate the Hebbian plasticity rules that are considered in this paper to rules for synaptic plasticity in the brain that have been found in neuroscience. Synaptic plasticity in the brain appears to rely often on a multitude of gating signals, and on the relative timing of pre- and postsynaptic activity. Also the recent history of a synapse appears to play a role. Hence I find it difficult to convince myself that this paper provide new insight into synaptic plasticity or the organization of learning in the brain."
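The outer loop the meta-review describes — evolution over the plasticity-rule parameters rather than over the weights — can be sketched with a simple evolution-strategies update. Here `fitness` is a stand-in quadratic; in actual use it would run an episode with the Hebbian coefficients `theta` and return the agent's reward, and all hyperparameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(theta):
    """Stand-in objective; a real run would evaluate an agent whose
    plasticity coefficients are theta and return its episode reward."""
    return -np.sum((theta - 0.5) ** 2)

theta = rng.normal(size=10)        # flattened plasticity coefficients
theta0 = theta.copy()              # keep the starting point for comparison
sigma, lr, pop = 0.1, 0.05, 64     # noise scale, step size, population size

# Simple population-based ES: perturb, evaluate, move along the
# reward-weighted average of the perturbations.
for _ in range(200):
    eps = rng.normal(size=(pop, theta.size))
    rewards = np.array([fitness(theta + sigma * e) for e in eps])
    ranks = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta = theta + lr / (pop * sigma) * eps.T @ ranks
```

Note that gradients are never taken through the agent's lifetime; the inner Hebbian dynamics stay purely local, and only this black-box outer loop touches the rule's parameters.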