Meta-brain Models: biologically-inspired cognitive agents

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) systems based solely on neural networks or symbolic computation present a representational complexity challenge. While minimal representations can produce behavioral outputs like locomotion or simple decision-making, more elaborate internal representations might offer a richer variety of behaviors. We propose that these issues can be addressed with a computational approach we call meta-brain models. Meta-brain models are embodied hybrid models that include layered components with varying degrees of representational complexity. We propose combinations of layers composed of specialized types of models. Rather than unifying the components through a generic black-box approach, the relationship between layers mimics systems such as the neocortical-thalamic system of the mammalian brain, which uses both feedforward and feedback connectivity to facilitate functional communication. Importantly, the relationship between layers can be made anatomically explicit, allowing for structural specificity that can be incorporated into the model's function in interesting ways. We propose several types of layers that might be functionally integrated into agents performing unique types of tasks, from agents that simultaneously perform morphogenesis and perception, to agents that simultaneously undergo morphogenesis and the acquisition of conceptual representations. Our approach to meta-brain models involves creating models with different degrees of representational complexity, a layered meta-architecture that mimics the structural and functional heterogeneity of biological brains, and an input/output methodology flexible enough to accommodate cognitive functions, social interactions, and adaptive behaviors more generally. We conclude by proposing next steps in the development of this flexible and open-source approach.
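To make the layered architecture concrete, the following is a minimal Python sketch of a two-layer meta-brain agent: a low-complexity reactive layer sends a feedforward signal to a higher-complexity representational layer, which in turn re-tunes the reactive layer through an explicit feedback connection, loosely analogous to the neocortical-thalamic loop mentioned in the abstract. The class names, update rules, and parameters are illustrative assumptions, not the authors' specification.

```python
import numpy as np

class ReactiveLayer:
    """Low representational complexity: maps stimulus directly to action."""
    def __init__(self, gain=1.0):
        self.gain = gain

    def forward(self, stimulus):
        return self.gain * stimulus  # feedforward signal

class RepresentationalLayer:
    """Higher complexity: integrates a persistent internal state and
    returns a feedback signal that modulates the reactive layer."""
    def __init__(self, decay=0.9):
        self.state, self.decay = 0.0, decay

    def update(self, feedforward_input):
        self.state = self.decay * self.state + (1 - self.decay) * feedforward_input
        return 1.0 / (1.0 + abs(self.state))  # feedback gain in (0, 1]

class MetaBrainAgent:
    """Two layers joined by an anatomically explicit feedback connection."""
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.representational = RepresentationalLayer()

    def step(self, stimulus):
        ff = self.reactive.forward(stimulus)   # feedforward pass
        fb = self.representational.update(ff)  # feedback pass
        self.reactive.gain = fb                # explicit inter-layer link
        return ff * fb                         # motor output

agent = MetaBrainAgent()
for s in np.sin(np.linspace(0, 4 * np.pi, 20)):
    print(round(agent.step(s), 3))
```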


Embodied Continual Learning Across Developmental Time Via Developmental Braitenberg Vehicles

arXiv.org Artificial Intelligence

Bradly Alicea, Rishabh Chakrabarty, Akshara Gopi, Anson Lim, and Jesse Parent

There is much to learn through the synthesis of developmental biology, cognitive science, and computational modeling. One lesson from this perspective is that the initialization of intelligent programs cannot rely solely on the manipulation of numerous parameters. Our path forward is to present a design for developmentally-inspired learning agents based on the Braitenberg Vehicle. Using these agents to exemplify artificial embodied intelligence, we move closer to modeling embodied experience and morphogenetic growth as components of cognitive developmental capacity. We consider various factors of biological and cognitive development that influence the generation of adult phenotypes and the contingency of available developmental pathways. These mechanisms produce emergent connectivity with shifting weights and adaptive network topology, illustrating the importance of developmental processes in training neural networks. This approach provides a blueprint for the adaptive agent behavior that might result from a developmental approach: namely, by exploiting critical periods of growth and acquisition, an explicitly embodied network architecture, and a distinction between the assembly of neural networks and active learning on these networks.
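As a rough illustration of this design, here is a hedged Python sketch of a developmental Braitenberg vehicle: crossed sensor-to-motor weights start near zero, grow fastest during a critical period, and then consolidate, so network assembly (growth) is distinguished from later behavior on the assembled network. The growth schedule, critical-period window, and Hebbian-style update are assumptions chosen for illustration, not the authors' implementation.

```python
import math

class DevoVehicle:
    """A Braitenberg-style vehicle whose sensor-motor weights develop."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0
        # Crossed sensor-to-motor weights start near zero and grow.
        self.w_left, self.w_right = 0.01, 0.01

    def sense(self, light):
        """Light intensity at two sensors offset from the heading."""
        def intensity(angle):
            sx = self.x + math.cos(self.heading + angle)
            sy = self.y + math.sin(self.heading + angle)
            d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
            return 1.0 / (1.0 + d2)
        return intensity(0.5), intensity(-0.5)

    def step(self, light, t, critical_period=(0, 50)):
        s_left, s_right = self.sense(light)
        # Crossed excitatory wiring, as in Braitenberg's vehicle 2b.
        m_left, m_right = self.w_left * s_right, self.w_right * s_left
        # Developmental plasticity: weights change fastest inside the
        # critical period, then consolidate at a much lower rate.
        rate = 0.1 if critical_period[0] <= t < critical_period[1] else 0.005
        self.w_left += rate * s_right * m_left
        self.w_right += rate * s_left * m_right
        # Kinematics of a differential-drive body.
        self.heading += m_right - m_left
        speed = (m_left + m_right) / 2.0
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

vehicle, light = DevoVehicle(), (5.0, 5.0)
for t in range(100):
    vehicle.step(light, t)
print(round(vehicle.x, 2), round(vehicle.y, 2), round(vehicle.w_left, 4))
```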


Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks

arXiv.org Artificial Intelligence

Biological plastic neural networks are systems with extraordinary computational capabilities, shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs, reviews the main methods and results, and presents new opportunities and developments.
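As a toy illustration of the EPANN idea, the sketch below evolves the coefficients of a generalized Hebbian plasticity rule, Δw = η(Axy + Bx + Cy + D), a rule family common in this literature, and scores each genome by how well its network adapts during a simulated lifetime. The task, fitness function, and evolution-strategy settings are assumptions chosen for brevity, not a method taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lifetime_fitness(genome, steps=100):
    """Score one genome by how well its plastic weight tracks a target."""
    A, B, C, D, eta = genome
    w, error = 0.0, 0.0  # single plastic weight from input to output
    for t in range(steps):
        x = np.sin(0.1 * t)   # input signal
        target = 0.5 * x      # the weight should converge near 0.5
        y = w * x             # network output
        error += (y - target) ** 2
        # Generalized Hebbian update; clipped to keep the dynamics stable.
        w = float(np.clip(w + eta * (A * x * y + B * x + C * y + D), -5.0, 5.0))
    return -error  # higher fitness is better

# Simple (mu + lambda) evolution over plasticity-rule genomes.
pop = rng.normal(0, 0.1, size=(20, 5))
for gen in range(50):
    scores = np.array([lifetime_fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-5:]]  # five best genomes survive
    children = np.repeat(parents, 3, axis=0) + rng.normal(0, 0.05, (15, 5))
    pop = np.vstack([parents, children])

best = pop[np.argmax([lifetime_fitness(g) for g in pop])]
print("evolved rule coefficients (A, B, C, D, eta):", np.round(best, 3))
```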


Sparsity through evolutionary pruning prevents neuronal networks from overfitting

arXiv.org Artificial Intelligence

Modern machine learning techniques take advantage of the exponentially rising computational power of new generations of processing units. Accordingly, the number of parameters trained to solve complex tasks has grown enormously over recent decades. Yet, in contrast to the brain, these networks still fail to develop general intelligence in the sense of solving several complex tasks with a single network architecture. One reason may be that the brain is not a randomly initialized neural network that must be trained simply by investing large amounts of computation, but instead has a fixed hierarchical structure from birth. To make progress in decoding the structural basis of biological neural networks, we chose a bottom-up approach in which we evolutionarily trained small neural networks to perform a maze task. This simple maze task requires dynamic decision-making with delayed rewards. We show that during evolutionary optimization, random severance of connections led to better generalization performance compared to fully connected networks. We conclude that sparsity is a central property of neural networks and should be considered in modern machine learning approaches.
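The following hedged sketch illustrates the core mechanism: each genome carries weights plus a binary connection mask, mutation can randomly sever connections, and selection favors genomes whose (possibly sparse) networks fit the training data. A toy regression problem stands in for the paper's maze task; all names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, (32, 4))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=32)  # noisy target
X_test = rng.uniform(-1, 1, (32, 4))
y_test = np.sin(X_test[:, 0])  # noise-free targets measure generalization

def predict(weights, mask, X):
    return X @ (weights * mask)  # severed connections contribute nothing

def mutate(weights, mask):
    w = weights + rng.normal(0, 0.05, size=4)
    m = mask.copy()
    if rng.random() < 0.2:  # occasionally sever one random connection
        m[rng.integers(4)] = 0
    return w, m

# Population of (weights, mask) genomes; masks start fully connected.
pop = [(rng.normal(0, 0.5, 4), np.ones(4)) for _ in range(30)]
for gen in range(200):
    scores = [-np.mean((predict(w, m, X_train) - y_train) ** 2) for w, m in pop]
    elite = [pop[i] for i in np.argsort(scores)[-10:]]
    pop = elite + [mutate(*elite[rng.integers(10)]) for _ in range(20)]

scores = [-np.mean((predict(w, m, X_train) - y_train) ** 2) for w, m in pop]
best_w, best_m = pop[int(np.argmax(scores))]
test_mse = float(np.mean((predict(best_w, best_m, X_test) - y_test) ** 2))
print("active connections:", int(best_m.sum()), "test MSE:", round(test_mse, 4))
```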


Brain architecture: A design for natural computation

arXiv.org Artificial Intelligence

Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented, an architecture still in use today. In those days, the organization of computers was based on concepts of brain organization. Here, we give an update on current results on the global organization of neural systems, outlining how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might once again inspire computer architecture.
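As a small worked example of the kind of topological analysis such reviews survey, the sketch below compares a small-world graph with an edge-matched random graph on clustering, characteristic path length, and behavior under random node failures. The graph sizes and failure fraction are arbitrary assumptions; the example requires the networkx library.

```python
import random
import networkx as nx

random.seed(0)
small_world = nx.watts_strogatz_graph(n=200, k=8, p=0.1, seed=0)
random_net = nx.gnm_random_graph(n=200, m=small_world.number_of_edges(), seed=0)

def describe(g, label):
    # Path length is computed on the largest connected component.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(label,
          "clustering:", round(nx.average_clustering(g), 3),
          "path length:", round(nx.average_shortest_path_length(giant), 3))

for g, label in [(small_world, "small-world"), (random_net, "random")]:
    describe(g, label)
    damaged = g.copy()
    damaged.remove_nodes_from(random.sample(list(damaged.nodes), 20))  # 10% failure
    describe(damaged, label + " after failures")
```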