hebbian
- Africa > Senegal > Kolda Region > Kolda (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > New Jersey > Bergen County > Mahwah (0.04)
- (2 more...)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > Alberta > Census Division No. 15 > Improvement District No. 9 > Banff (0.04)
- Europe > Germany > Hamburg (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- North America > United States (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Research Report (0.47)
- Overview (0.46)
- Instructional Material (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Evolutionary Systems (0.94)
6275d7071d005260ab9d0766d6df1145-AuthorFeedback.pdf
We agree O'Reilly's work is highly relevant and we should have cited it. We will also link to the repository containing code for replicating all results. We had investigated several alternatives like this before settling on our metric; we found this normalization would make the 2D-map visualization unintuitive. We will clarify these points in the paper. [...] it is well known that CHL [Eq. ...]
Characterizing emergent representations in a space of candidate learning rules for deep networks
How are sensory representations learned through experience? Deep learning offers a theoretical toolkit for studying how neural codes emerge under different learning rules. Studies suggesting that representations in deep networks resemble those in biological brains have mostly relied on one specific learning rule: gradient descent, the workhorse behind modern deep learning. However, it remains unclear how robust these emergent representations are to this specific choice of learning algorithm. Here we present a continuous two-dimensional space of candidate learning rules, parameterized by levels of top-down feedback and Hebbian learning.
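The abstract does not spell out the parameterization, so the following is only a minimal sketch of the idea: a weight update that linearly mixes a Hebbian term with a top-down, error-driven term. The function and coefficient names, the linear mixing, and the delta-rule form of the feedback term are all illustrative assumptions, not the paper's exact rule.

```python
# Sketch: a 2D family of learning rules spanned by Hebbian and
# top-down (error-driven) contributions. Names and mixing are assumptions.
import numpy as np

def mixed_update(W, x, y_target, hebbian_strength, feedback_strength, lr=0.01):
    """One update of weights W for a linear layer y = W @ x."""
    y = W @ x
    error = y_target - y                      # top-down teaching signal
    hebbian_term = np.outer(y, x)             # activity correlation (Hebbian)
    feedback_term = np.outer(error, x)        # error-driven (delta-rule-like)
    return W + lr * (hebbian_strength * hebbian_term
                     + feedback_strength * feedback_term)
```

Sweeping the two coefficients over a grid would trace out a continuous 2D space of candidate rules of the kind the abstract describes.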
Redundancy Maximization as a Principle of Associative Memory Learning
Blümel, Mark, Schneider, Andreas C., Neuhaus, Valentin, Ehrlich, David A., Graetz, Marcel, Wibral, Michael, Makkeh, Abdullah, Priesemann, Viola
Associative memory, traditionally modeled by Hopfield networks, enables the retrieval of previously stored patterns from partial or noisy cues. Yet the local computational principles required to enable this function remain incompletely understood. To formally characterize the local information processing in such systems, we employ a recent extension of information theory, Partial Information Decomposition (PID). PID decomposes the contribution of different inputs to an output into unique information from each input, redundant information across inputs, and synergistic information that emerges from combining different inputs. Applying this framework to individual neurons in classical Hopfield networks, we find that below the memory capacity the information in a neuron's activity is characterized by high redundancy between the external pattern input and the internal recurrent input, while synergy and unique information are close to zero until the memory capacity is surpassed and performance drops steeply. Inspired by this observation, we use redundancy as an information-theoretic learning goal, optimized directly for each neuron. This dramatically increases the network's memory capacity to 1.59, a more than tenfold improvement over the 0.14 capacity of classical Hopfield networks, and even outperforms recent state-of-the-art implementations of Hopfield networks. Ultimately, this work establishes redundancy maximization as a new design principle for associative memories and opens pathways for new associative memory models based on information-theoretic goals.
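For context, the 0.14 baseline cited above is the classical Hebbian Hopfield construction, sketched minimally below; the redundancy-maximizing rule itself requires a per-neuron PID estimate and is not reproduced here.

```python
# Minimal classical Hopfield network with the Hebbian rule, the baseline
# whose ~0.14 patterns-per-neuron capacity the paper reports improving on.
import numpy as np

def hebbian_weights(patterns):
    """patterns: (P, N) array of +/-1 patterns."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W

def recall(W, cue, steps=50):
    """Synchronous sign updates from a noisy cue until a fixed point."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1                # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```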
- North America > United States (0.14)
- Europe > Germany > Lower Saxony > Göttingen (0.04)
- Europe > Portugal > Lisbon > Lisbon (0.04)
- (2 more...)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > New York (0.04)
- (3 more...)
Kernel Ridge Regression for Efficient Learning of High-Capacity Hopfield Networks
Hopfield networks trained with Hebbian learning suffer from limited storage capacity. While supervised methods like Linear Logistic Regression (LLR) offer some improvement, kernel methods like Kernel Logistic Regression (KLR) significantly enhance storage capacity and noise robustness. However, KLR requires computationally expensive iterative learning. We propose Kernel Ridge Regression (KRR) as an efficient kernel-based alternative for learning high-capacity Hopfield networks. KRR utilizes the kernel trick and predicts bipolar states via regression, offering a non-iterative, closed-form solution for the dual variables. We evaluate KRR and compare its performance against Hebbian learning, LLR, and KLR. Our results demonstrate that KRR achieves state-of-the-art storage capacity (reaching a storage load of 1.5) and noise robustness comparable to KLR. Crucially, KRR drastically reduces training time, being orders of magnitude faster than LLR and significantly faster than KLR, especially at higher storage loads. This establishes KRR as a potent and highly efficient method for building high-performance associative memories, matching KLR's performance with substantial training-speed advantages. This work provides the first empirical comparison between KRR and KLR in the context of Hopfield network learning.
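Based on the abstract's description, the core of the method is a single closed-form dual solve at training time and thresholded regression at recall time. The sketch below assumes an RBF kernel and particular `gamma` and `lam` values, which may differ from the paper's choices.

```python
# Sketch: Kernel Ridge Regression learning for a Hopfield-style associative
# memory. Dual variables come from a closed-form solve; recall thresholds
# the regression output to +/-1. Kernel and hyperparameters are assumptions.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_krr(patterns, lam=1e-3, gamma=0.1):
    """patterns: (P, N) bipolar patterns. Returns dual variables (P, N)."""
    K = rbf_kernel(patterns, patterns, gamma)
    # Closed-form ridge solution in the dual: (K + lam*I) alpha = Y
    alpha = np.linalg.solve(K + lam * np.eye(len(patterns)), patterns)
    return alpha

def recall_step(state, patterns, alpha, gamma=0.1):
    """One synchronous update: regress every neuron's state, then threshold."""
    k = rbf_kernel(state[None, :], patterns, gamma)   # (1, P) similarities
    out = np.sign(k @ alpha).ravel()
    out[out == 0] = 1                                 # break ties toward +1
    return out
```

The single linear solve in `train_krr` is what replaces KLR's iterative optimization, which is where the reported training-time advantage would come from.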
- Asia > Japan (0.40)
- North America > United States (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.61)
Spike Agreement Dependent Plasticity: A scalable Bio-Inspired learning paradigm for Spiking Neural Networks
Bej, Saptarshi, E, Muhammed Sahad, Lakshmi, Gouri, Kumar, Harshit, Kar, Pritam, Das, Bikas C
We introduce Spike Agreement Dependent Plasticity (SADP), a biologically inspired synaptic learning rule for Spiking Neural Networks (SNNs) that relies on the agreement between pre- and post-synaptic spike trains rather than precise spike-pair timing. SADP generalizes classical Spike-Timing-Dependent Plasticity (STDP) by replacing pairwise temporal updates with population-level correlation metrics such as Cohen's kappa. The SADP update rule admits linear-time complexity and supports efficient hardware implementation via bitwise logic. Empirical results on MNIST and Fashion-MNIST show that SADP, especially when equipped with spline-based kernels derived from our experimental iontronic organic memtransistor device data, outperforms classical STDP in both accuracy and runtime. Our framework bridges the gap between biological plausibility and computational scalability, offering a viable learning mechanism for neuromorphic systems.
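A minimal sketch of a SADP-style update consistent with the abstract: Cohen's kappa is computed over a window of binary pre- and post-synaptic spike trains, and the weight moves in proportion to it. The scalar learning rate and the direct proportionality of the weight change to kappa are assumptions for illustration; the paper's spline-based, device-derived kernels are not modeled here.

```python
# Sketch: agreement-driven plasticity. Potentiate when pre/post spike trains
# agree beyond chance (kappa > 0), depress when they disagree (kappa < 0).
import numpy as np

def cohens_kappa(pre, post):
    """pre, post: equal-length binary (0/1) spike trains."""
    po = np.mean(pre == post)                     # observed agreement
    p_both = pre.mean() * post.mean()             # chance both spike
    p_none = (1 - pre.mean()) * (1 - post.mean()) # chance both silent
    pe = p_both + p_none                          # chance-level agreement
    if pe == 1.0:                                 # degenerate constant trains
        return 0.0
    return (po - pe) / (1 - pe)

def sadp_update(w, pre, post, lr=0.01):
    """One window's weight update, proportional to spike-train agreement."""
    return w + lr * cohens_kappa(pre, post)
```

Because the update depends only on counts of agreements and marginal firing rates, it runs in linear time over the window and maps naturally onto bitwise logic, consistent with the hardware-efficiency claim in the abstract.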
- Asia > India > Kerala > Thiruvananthapuram (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Health & Medicine > Therapeutic Area > Neurology (0.69)
- Materials > Chemicals > Commodity Chemicals (0.46)