Convolutional Neural Generative Coding: Scaling Predictive Coding to Natural Images
Alexander Ororbia, Ankur Mali
The algorithm known as backpropagation of errors [65, 29] (or backprop) has served as a crucial element behind the tremendous progress made in recent machine learning research, progress that has been accelerated by advances in computational hardware as well as the increasing availability of vast quantities of data. Nevertheless, despite reaching or surpassing human-level performance on many different tasks, ranging from those in computer vision [18] to game-playing [60], the field still has a long way to go toward developing artificial general intelligence. In order to increase task-level performance, the size of deep networks has grown greatly over the years, up to hundreds of billions of synaptic parameters, as seen in modern-day transformer networks [12]. However, this trend has raised concerns about energy consumption [49] and about whether such large systems can attain the flexible generalization ability of the human brain [5]. Furthermore, backprop itself imposes limitations beyond its long-argued biological implausibility [11, 15, 59], such as its dependence on a global error feedback pathway for determining each neuron's individual contribution to a deep network's overall performance [34], resulting in sequential, non-local backward updates that make parallelization difficult (in strong contrast to how learning occurs in the brain [24, 47, 46]).
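To make the non-locality concrete, here is a minimal NumPy sketch of generic backprop in a two-layer network; it is an illustration of the dependency structure the abstract describes, not the paper's convolutional neural generative coding algorithm, and all names (`W1`, `W2`, `lr`, the shapes) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's method): a 2-layer MLP trained with
# backprop, illustrating the global error feedback pathway. Note that
# dW1 cannot be computed until the output error has been propagated
# backward through W2 -- the layer-1 update is non-local and must wait
# on the sequential backward pass.

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # batch of 4 inputs
y = rng.standard_normal((4, 2))          # regression targets
W1 = rng.standard_normal((8, 16)) * 0.1  # layer-1 weights
W2 = rng.standard_normal((16, 2)) * 0.1  # layer-2 weights

# Forward pass
h = np.tanh(x @ W1)
y_hat = h @ W2

# Backward pass (strictly sequential: layer 2 first, then layer 1)
e_out = y_hat - y                        # global output error
dW2 = h.T @ e_out                        # local to layer 2
e_hidden = (e_out @ W2.T) * (1 - h**2)   # error must travel back through W2
dW1 = x.T @ e_hidden                     # depends on downstream W2 and e_out

# Gradient step
lr = 0.01
W1 -= lr * dW1
W2 -= lr * dW2
```

Predictive-coding schemes of the kind this paper scales up instead maintain layer-local error units, so each layer's update can be computed from locally available signals rather than waiting on a full backward sweep; the sketch above is only meant to show the dependency that such local rules avoid.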
arXiv.org Artificial Intelligence
Feb-5-2023