A general learning system based on neuron bursting and tonic firing

arXiv.org Artificial Intelligence

This paper proposes a framework for the biological learning mechanism as a general learning system. The proposal is as follows. The bursting and tonic firing patterns found in many neuron types in the brain correspond to two separate modes of information processing, one giving rise to awareness and the other remaining subliminal. In such a coding scheme, a neuron in the bursting state codes for the highest level of perceptual abstraction, representing a pattern of sensory stimuli, or of volitional abstraction, representing a pattern of muscle contraction sequences. Within the 50–250 ms minimum integration time of experience, the bursting neurons form synchrony ensembles that allow binding of related percepts. The degree to which different bursting neurons can be merged into the same synchrony ensemble depends on the underlying cortical connections, which represent the degree of perceptual similarity. These synchrony ensembles compete for selective attention in order to remain active. The dominant synchrony ensemble triggers episodic memory recall in the hippocampus while forming new episodic memories from current sensory stimuli, resulting in a stream of thoughts. Neuromodulation shapes both the top-down selection of synchrony ensembles and memory formation. Episodic memory stored in the hippocampus is transferred to semantic and procedural memory in the cortex during rapid eye movement sleep, by updating cortical synaptic weights with spike-timing-dependent plasticity. As the synaptic weights are updated, new neurons become bursting while previously bursting neurons become tonic, allowing the bursting population to move up to a higher level of perceptual abstraction. Finally, the proposed learning mechanism is compared with the back-propagation algorithm used in deep neural networks, and a way in which this framework could address the credit assignment problem is presented.
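
The two-mode coding scheme and the sleep-time weight update lend themselves to a compact sketch. The snippet below is a minimal illustration, not the paper's model: the threshold constant, the mode function, and all numeric parameters are assumptions introduced here, and the plasticity rule is the standard pair-based form of STDP rather than anything the paper specifies.

```python
import numpy as np

# All names and numeric values below are illustrative assumptions; the
# paper does not specify a concrete implementation.
BURST_THRESHOLD = 0.8          # drive above which a neuron enters bursting mode
A_PLUS, A_MINUS = 0.01, 0.012  # STDP potentiation/depression amplitudes
TAU_STDP = 20.0                # STDP time constant (ms)

def firing_mode(weights, inputs):
    """Return 'bursting' (abstraction-coding, reaches awareness) or
    'tonic' (subliminal), depending on how strongly the input pattern
    matches the neuron's learned weights."""
    drive = float(np.dot(weights, inputs))
    return "bursting" if drive > BURST_THRESHOLD else "tonic"

def stdp_update(weight, t_pre, t_post):
    """Standard pair-based spike-timing-dependent plasticity: potentiate
    when the presynaptic spike precedes the postsynaptic spike, depress
    otherwise. Used here as a stand-in for the sleep-time weight update."""
    dt = t_post - t_pre
    if dt > 0:
        weight += A_PLUS * np.exp(-dt / TAU_STDP)
    else:
        weight -= A_MINUS * np.exp(dt / TAU_STDP)
    return float(np.clip(weight, 0.0, 1.0))
```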


Models of Brains: What Should We Borrow From Biology?

AAAI Conferences

Traditional models of neural networks used in computer science and much artificial intelligence research are typically based on an understanding of the brain from many decades ago. In this paper we present an overview of the major known functional properties of natural neural networks, together with the evidence for how these properties are implicated in learning processes and for their interesting computational properties. Introducing some of these functional properties into neural networks evolved to perform learning or adaptation tasks has resulted in better solutions and improved evolvability. A common theme emerging across computational studies of these properties is self-organization: self-organizing principles are thought to play a critical role in the development and functioning of natural neural networks, so an interesting direction for future research is to explicitly explore the use of self-organizing systems, via the functional properties reviewed here, in the development of neural networks for AI systems.
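
Since self-organization is the common theme here, a small worked example may help. The following is a hedged sketch using Oja's rule, a classic self-organizing learning rule; the choice of rule, the learning rate, and the toy data are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

def oja_step(w, x, lr=0.01):
    """One step of Oja's rule: Hebbian growth with a decay term that keeps
    the weight norm bounded, so the weight vector self-organizes toward
    the first principal component of the input distribution."""
    y = float(np.dot(w, x))
    return w + lr * y * (x - y * w)

# Toy usage: weights self-organize to the dominant direction of the data.
rng = np.random.default_rng(0)
w = rng.normal(size=2)
for _ in range(2000):
    x = rng.normal(size=2) * np.array([3.0, 0.5])  # variance mostly on axis 0
    w = oja_step(w, x)
print(w)  # approximately a unit vector along axis 0
```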


Towards a framework for the evolution of artificial general intelligence

arXiv.org Artificial Intelligence

In this work, a novel framework for the emergence of general intelligence is proposed, in which agents evolve through environmental rewards and learn throughout their lifetime without supervision, i.e., self-supervised learning through embodiment. The chosen control mechanism for agents is a biologically plausible neuron model based on spiking neural networks. Network topologies become more complex through evolution, i.e., the topology is not fixed, while synaptic weights cannot be inherited, i.e., newborn brains are untrained and have no innate knowledge of the environment. What is subject to the evolutionary process is the network topology, the types of neurons, and the type of learning. This ensures that controllers passed down through the generations have the intrinsic ability to learn and adapt during their lifetime in mutable environments. We envision that the described approach may lead to the emergence of the simplest form of artificial general intelligence.
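
The inheritance scheme (evolved topology and types, non-heritable weights) can be sketched in a few lines. The classes, the mutation rate, and the weight range below are hypothetical placeholders for illustration; the paper does not prescribe this encoding.

```python
import random, copy

class Genome:
    """Minimal genome: the evolved quantities are the topology (a set of
    directed edges) and per-node types; synaptic weights are NOT part of
    the genome and are re-initialized at birth."""
    def __init__(self, edges, neuron_types):
        self.edges = edges                 # list of (src, dst) connections
        self.neuron_types = neuron_types   # e.g., {"n3": "excitatory"}

    def mutate(self):
        child = copy.deepcopy(self)
        if random.random() < 0.3:          # hypothetical mutation rate
            src = random.choice(list(child.neuron_types))
            dst = random.choice(list(child.neuron_types))
            child.edges.append((src, dst))
        return child

def develop(genome):
    """Build a 'newborn' network: inherited structure, random weights,
    so any competence must be learned within the agent's lifetime.
    (Duplicate edges collapse in this toy dict encoding.)"""
    return {edge: random.uniform(-1.0, 1.0) for edge in genome.edges}
```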


Deep Learning in Spiking Neural Networks

arXiv.org Artificial Intelligence

Deep learning approaches have recently shown remarkable performance in many areas of pattern recognition. In spite of their power in hierarchical feature extraction and classification, these networks are computationally expensive and difficult to implement on hardware for portable devices. In another vein of research on neural network architectures, spiking neural networks (SNNs) have been described as power-efficient models because of their sparse, spike-based communication framework. SNNs are brain-inspired in that they seek to mimic the accurate and efficient functionality of the brain. Recent studies attempt to take advantage of both frameworks (deep learning and SNNs) by developing deep SNN architectures that achieve the high performance of proven deep networks while running on bio-inspired, power-efficient platforms. Additionally, since the brain processes different stimulus patterns through multi-layer SNNs that communicate by spike trains via adaptive synapses, developing artificial deep SNNs can also be very helpful for understanding the computations performed by biological neural circuits. Having both computational and experimental backgrounds, we aim to provide a comprehensive summary of recent advances in developing deep SNNs that may assist computer scientists interested in developing more advanced and efficient networks and help experimentalists frame new hypotheses for neural information processing in the brain using a more realistic model.
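
As a concrete anchor for the spiking framework discussed in this survey, the following sketches a leaky integrate-and-fire (LIF) neuron, the unit most deep-SNN work builds on. The parameter values and the rate-coding demonstration are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_lif(input_current, v_thresh=1.0, v_reset=0.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrates input current with a
    leak toward rest; emits a spike (1) and resets when the membrane
    potential crosses threshold, else emits 0."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v + i) / tau          # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset                   # reset after spike
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input yields a regular spike train whose
# rate encodes the input magnitude (simple rate coding).
print(simulate_lif([1.5] * 50))
```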


The Neural Cognitive Architecture

AAAI Conferences

The development of a cognitive architecture based on neurons is currently viable. An initial architecture is proposed, based around a slow serial system and a fast parallel system, with additional subsystems for behaviours such as sensing, action, and language. Current technology allows us to emulate millions of neurons in real time, supporting the development and use of relatively sophisticated systems based on the architecture. While knowledge of biological neural processing, learning rules, and cognitive behaviour is extensive, it is far from complete. This architecture provides a slowly varying neural structure that forms the framework for cognition and learning. It will support both the exploration of biological neural behaviour in functioning animals and the development of artificial systems based on neurons.
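
A minimal sketch of the two-system split described above may make the division of labour concrete. The class names and the toy perceive/deliberate interface are hypothetical illustrations, not part of the architecture's specification.

```python
import concurrent.futures

class FastParallelSystem:
    """Stand-in for the fast parallel subsystem: many simple detectors
    evaluated simultaneously over the same stimulus."""
    def __init__(self, detectors):
        self.detectors = detectors          # callables: stimulus -> score

    def perceive(self, stimulus):
        with concurrent.futures.ThreadPoolExecutor() as pool:
            return list(pool.map(lambda d: d(stimulus), self.detectors))

class SlowSerialSystem:
    """Stand-in for the slow serial subsystem: deliberative steps applied
    strictly one at a time to the pooled percepts."""
    def deliberate(self, percepts):
        best = None
        for p in percepts:                  # sequential, one step at a time
            best = p if best is None or p > best else best
        return best

# Toy usage with hypothetical detectors.
fast = FastParallelSystem([lambda s: s * 0.5, lambda s: s - 1, lambda s: s ** 2])
slow = SlowSerialSystem()
print(slow.deliberate(fast.perceive(3.0)))  # -> 9.0
```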