A recent comprehensive overview of 40 years of research in cognitive architectures (Kotseruba and Tsotsos 2020) evaluates the modelling of core human cognitive abilities, but only marginally addresses biologically plausible approaches based on natural computation. This mini review presents a set of perspectives and approaches that have shaped the development of biologically inspired computational models in the recent past and that can lead to the development of biologically more realistic cognitive architectures. To describe the continuum of natural cognitive architectures, from basal cellular to human-level cognition, we use an evolutionary info-computational framework, in which natural/physical/morphological computation leads to the evolution of increasingly complex cognitive systems. Forty years ago, when the first cognitive architectures were proposed, the understanding of cognition, embodiment and evolution was different, as was the state of the art in information physics, bioinformatics, information chemistry, computational neuroscience, complexity theory, self-organization, the theory of evolution, and information and computation. Novel developments support a constructive interdisciplinary framework for cognitive architectures in the context of computing nature, in which interactions between constituents at different levels of organization lead to a complexification of agency and increased cognitive capacities. We identify several important research questions for further investigation that can increase our understanding of cognition in nature and inspire new developments in cognitive technologies. Recently, basal cell cognition has attracted considerable interest for its possible applications in medicine, new computing technologies, and micro- and nanorobotics.
One of the ambitions of artificial intelligence research is to root artificial intelligence deeply in basic science while developing brain-inspired artificial intelligence platforms that will promote new scientific discoveries. These challenges are essential to push forward both the theory of artificial intelligence and applied technologies research. This paper presents the grand challenges of artificial intelligence research for the next 20 years, which include: (i) to explore the working mechanism of the human brain on the basis of understanding brain science, neuroscience, cognitive science, psychology and data science; (ii) to explain how electrical signals are transmitted by the human brain, and what the coordination mechanism between brain neural electrical signals and human activities is; (iii) to root brain-computer interface (BCI) and brain-muscle interface (BMI) technologies deeply in the science of human behaviour; (iv) to pursue research on knowledge-driven visual commonsense reasoning (VCR) and develop a new inference engine for cognitive network recognition (CNR); (v) to develop high-precision, multi-modal intelligent perceptrons; (vi) to investigate intelligent reasoning and fast decision-making systems based on knowledge graphs (KG). We believe that frontier theoretical innovation in AI, knowledge-driven modeling methodologies for commonsense reasoning, revolutionary innovations and breakthroughs in novel algorithms and new AI technologies, and the development of responsible AI should be the main research strategies of AI scientists in the future.
At present, artificial intelligence in the form of machine learning is making impressive progress, especially in the field of deep learning (DL). Deep learning algorithms have been inspired from the beginning by nature, specifically by the human brain, in spite of our incomplete knowledge of brain function. Learning from nature is a two-way process, as discussed in : computing is learning from neuroscience, while neuroscience is quickly adopting information processing models. The question is what the inspiration from computational nature can contribute to deep learning at this stage of its development, and to what extent models and experiments in machine learning can motivate, justify and lead research in neuroscience and cognitive science, as well as practical applications of artificial intelligence.
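The brain inspiration behind deep learning can be made concrete with its basic building block: an artificial neuron that loosely mimics how a biological neuron integrates synaptic inputs before firing. The sketch below is a minimal illustration in plain Python; the function names and numbers are assumptions for this example, not taken from any particular deep learning library.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    passed through a nonlinear activation (here a sigmoid), loosely
    analogous to a biological neuron's firing rate."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A dense layer is many neurons sharing the same inputs;
    stacking such layers yields a 'deep' network."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Hypothetical 2-input, 2-neuron layer.
out = layer([0.5, -1.0], [[1.0, 0.2], [-0.3, 0.8]], [0.0, 0.1])
```

The analogy is deliberately loose: real neurons spike, adapt, and are far more complex, which is part of why the two-way exchange between neuroscience and machine learning remains productive.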
Fuzzy Cognitive Maps (FCMs) are a complex-systems modeling technique that, owing to its unique advantages, has lately risen in popularity. FCMs are based on graphs that represent the causal relationships among the parameters of the system to be modeled, and they stand out for their interpretability and flexibility. With the recent popularity of FCMs, a plethora of research efforts have been devoted to developing and optimizing the model. One of the most important elements of an FCM is the learning algorithm it uses, which largely determines its effectiveness. Learning algorithms adjust the weights of the FCM's edges (the causal links between nodes), with the goal of converging towards the desired behavior. The present study reviews the genetic algorithms used for training FCMs and gives a general overview of FCM learning algorithms, putting evolutionary computing into this wider context.
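The mechanics described above can be sketched in a few lines: each concept's activation is updated as a sigmoid-squashed weighted sum over the incoming causal edges, and a genetic algorithm searches for an edge-weight matrix whose steady state matches a desired concept profile. Everything below (function names, population size, mutation scale, the 3-concept system) is an illustrative assumption, not drawn from any particular FCM library; published FCM training typically uses more elaborate real-coded GA operators.

```python
import math
import random

def fcm_step(state, W):
    """One synchronous FCM update: concept i becomes the sigmoid of the
    weighted sum of all concepts j over edge weights W[j][i]."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-sum(W[j][i] * state[j] for j in range(n))))
            for i in range(n)]

def fcm_run(state, W, steps=30):
    """Iterate the map until (approximately) a steady state."""
    for _ in range(steps):
        state = fcm_step(state, W)
    return state

def fitness(W, initial, target):
    """Negative squared error between the reached and desired states."""
    final = fcm_run(initial, W)
    return -sum((f - t) ** 2 for f, t in zip(final, target))

def evolve(n, initial, target, pop=40, gens=60, seed=0):
    """Tiny real-coded GA over the n*n weight matrix: keep an elite,
    breed children by averaging two elites plus Gaussian mutation."""
    rng = random.Random(seed)
    rand_W = lambda: [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    population = [rand_W() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda W: fitness(W, initial, target), reverse=True)
        elite = population[: pop // 4]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([[(a[i][j] + b[i][j]) / 2 + rng.gauss(0, 0.1)
                              for j in range(n)] for i in range(n)])
        population = elite + children
    return max(population, key=lambda W: fitness(W, initial, target))

# Hypothetical 3-concept system: evolve weights driving the map
# from an initial activation profile toward a target profile.
initial, target = [0.4, 0.6, 0.5], [0.8, 0.2, 0.6]
best = evolve(3, initial, target)
```

The GA here treats the whole weight matrix as one chromosome, which is the usual encoding in evolutionary FCM learning; interpretability is preserved because the evolved weights remain readable as signed causal strengths between named concepts.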