The vast body of information from the neurosciences may enable a bottom-up understanding of human intelligence; that is, the derivation of function from mechanism. This article describes such a research program: simulation and analysis of the circuits of the brain have led to the derivation of a detailed set of elemental and composed operations emerging from individual and combined circuits. The specific hypothesis is forwarded that these operations constitute the "instruction set" of the brain, that is, the basic mental operations from which all complex behavioral and cognitive abilities are constructed. This would establish a unified formalism for describing human faculties ranging from perception and learning to reasoning and language, and represent a novel and potentially fruitful research path toward the construction of human-level intelligence.
This paper proposes a framework for the biological learning mechanism as a general learning system. The proposal is as follows. The bursting and tonic modes of firing patterns found in many neuron types in the brain correspond to two separate modes of information processing, with one mode resulting in awareness and the other being subliminal. In such a coding scheme, a neuron in the bursting state codes for the highest level of perceptual abstraction representing a pattern of sensory stimuli, or of volitional abstraction representing a pattern of muscle contraction sequences. Within the 50–250 ms minimum integration time of experience, the bursting neurons form synchrony ensembles to allow for binding of related percepts. The degree to which different bursting neurons can be merged into the same synchrony ensemble depends on the underlying cortical connections that represent the degree of perceptual similarity. These synchrony ensembles compete for selective attention in order to remain active. The dominant synchrony ensemble triggers episodic memory recall in the hippocampus while forming new episodic memory from current sensory stimuli, resulting in a stream of thoughts. Neuromodulation modulates both the top-down selection of synchrony ensembles and memory formation. Episodic memory stored in the hippocampus is transferred to semantic and procedural memory in the cortex during rapid eye movement (REM) sleep, by updating cortical synaptic weights with spike-timing-dependent plasticity (STDP). As synaptic weights are updated, new neurons become bursting while previously bursting neurons become tonic, allowing bursting neurons to move up to a higher level of perceptual abstraction. Finally, the proposed learning mechanism is compared with the back-propagation algorithm used in deep neural networks, and it is shown how the proposal could address the credit assignment problem.
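The memory-transfer step above relies on spike-timing-dependent plasticity: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike, and weakened otherwise. A minimal sketch of the standard pair-based STDP rule illustrates the idea; the parameter values and function names here are illustrative assumptions, not taken from the paper.

```python
import math

def stdp_delta_w(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change (illustrative parameters, in ms).

    dt = t_post - t_pre. Positive dt (pre leads post) yields potentiation;
    negative dt (post leads pre) yields depression.
    """
    if dt > 0:    # causal pairing -> long-term potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # anti-causal pairing -> long-term depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Replaying a stored spike pairing (as hypothesized during REM sleep)
# strengthens causal pre->post connections and weakens the reverse:
w = 0.5
w += stdp_delta_w(10.0)   # pre leads post by 10 ms: weight increases
w += stdp_delta_w(-10.0)  # post leads pre by 10 ms: weight decreases
```

Under such a rule, repeated replay of hippocampal spike sequences would bias cortical weights toward the causal structure of the stored episode, which is one way the proposed consolidation step could be modeled.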
This article surveys engineering and neuroscientific models of planning as a cognitive function, a function regarded as typical of fluid intelligence in discussions of general intelligence. It aims to present existing planning models as references for realizing the planning function in brain-inspired AI or artificial general intelligence (AGI). It also proposes themes for the research and development of brain-inspired AI from the viewpoint of tasks and architecture.
What is the place of emotion in intelligent robots? In the past two decades, researchers have advocated for the inclusion of some emotion-related components in the general information processing architecture of autonomous agents, for example, to enable better communication with humans or to instill a sense of urgency to action. The framework advanced here goes beyond these approaches and proposes that emotion and motivation need to be integrated with all aspects of the architecture. Thus, cognitive-emotional integration is a key design principle. Emotion is not an "add-on" that endows a robot with "feelings" (for instance, reporting or expressing its internal state). Rather, it allows the significance of percepts, plans, and actions to be an integral part of all its computations. It is hypothesized that a sophisticated artificial intelligence cannot be built from separate cognitive and emotional modules. A hypothetical test inspired by the Turing test, called the Dolores test, is proposed to test this assertion.
The field of machine learning has focused primarily on discretized sub-problems of intelligence (e.g., vision, speech, natural language), while neuroscience tends to be observation-heavy, providing few guiding theories. Artificial intelligence is unlikely to emerge through only one of these disciplines; instead, it is likely to arise from some amalgamation of their algorithmic and observational findings. As a result, a number of problems should be addressed in order to select the beneficial aspects of both fields. In this article, we propose leading questions to guide the future of artificial intelligence research. There are clear computational principles on which the brain operates; the problem is finding these computational needles in a haystack of biological complexity. Biology has clear constraints, but by not using it as a guide we are constraining ourselves.