Scalable Equilibrium Propagation via Intermediate Error Signals for Deep Convolutional CRNNs
Jiaqi Lin, Malyaban Bal, Abhronil Sengupta
arXiv.org Artificial Intelligence
Equilibrium Propagation (EP) is a biologically inspired local learning rule first proposed for convergent recurrent neural networks (CRNNs), in which synaptic updates depend only on neuron states from two distinct phases. EP estimates gradients that closely align with those computed by Backpropagation Through Time (BPTT) while significantly reducing computational demands, positioning it as a potential candidate for on-chip training in neuromorphic architectures. However, prior studies on EP have been constrained to shallow architectures, as deeper networks suffer from the vanishing gradient problem, leading to convergence difficulties in both energy minimization and gradient computation. To address the vanishing gradient problem in deep EP networks, we propose a novel EP framework that incorporates intermediate error signals to enhance information flow and the convergence of neuron dynamics. This is the first work to integrate knowledge distillation and local error signals into EP, enabling the training of significantly deeper architectures. Our proposed approach achieves state-of-the-art performance on the CIFAR-10 and CIFAR-100 datasets, showcasing its scalability on deep VGG architectures. These results represent a significant advancement in the scalability of EP, paving the way for its application in real-world systems.
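The two-phase update the abstract describes can be illustrated with a minimal NumPy sketch of classic EP on a toy recurrent network (not the paper's deep convolutional method or its intermediate error signals). The network sizes, dynamics constants, and the `ep_update` helper below are illustrative assumptions; only the contrastive rule itself, which compares neuron states from a free phase and a weakly nudged phase, follows the standard EP formulation.

```python
import numpy as np

def ep_update(W, x, y_target, beta=0.5, steps=100, dt=0.2):
    """One EP training step on a toy symmetric recurrent network.

    Phase 1 relaxes the state s to a free equilibrium; phase 2 repeats the
    relaxation with the output units weakly nudged (strength beta) toward the
    target. The weight update uses only these two equilibrium states, so it
    is purely local -- no backpropagated error signal is needed.
    """
    def relax(nudged):
        s = np.zeros(W.shape[0])
        for _ in range(steps):
            rho = np.clip(s, 0.0, 1.0)          # hard-sigmoid firing rate
            ds = -s + W @ rho + x               # leaky energy-descent dynamics
            if nudged:                          # nudge output units toward target
                ds[-y_target.size:] += beta * (y_target - rho[-y_target.size:])
            s = s + dt * ds
        return np.clip(s, 0.0, 1.0)

    s_free = relax(nudged=False)   # phase 1: free equilibrium
    s_nudged = relax(nudged=True)  # phase 2: weakly clamped equilibrium
    # contrastive Hebbian-style update from the two phases
    dW = (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free)) / beta
    return dW, s_free, s_nudged
```

As beta shrinks, this finite difference approaches the gradient that BPTT would compute on the relaxation trajectory, which is the alignment property the abstract refers to.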
Aug-25-2025