Neuroscience: Instructional Materials


A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II: Applications, Cognitive Models, and Challenges

arXiv.org Artificial Intelligence

This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and vector distributed representations. Holographic Reduced Representations is an influential HDC/VSA model that is well known in the machine learning domain and often used to refer to the whole family. However, for the sake of consistency, we use HDC/VSA to refer to the area. Part I of this survey covered foundational aspects of the area, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and transforming input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the machine learning/artificial intelligence domain; however, we also cover other applications to provide a thorough picture. The survey is written to be useful for both newcomers and practitioners.
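
The algebraic core the abstract refers to is compact enough to show concretely. Below is a minimal sketch of one common HDC/VSA model (the Multiply-Add-Permute family), with binding as element-wise multiplication and bundling as a majority vote; the record-encoding example and all names are illustrative, not code from the survey.

```python
# Minimal sketch of core HDC/VSA operations in the MAP (Multiply-Add-Permute)
# model: binding is element-wise multiplication, bundling is thresholded
# addition, and similarity is the normalized dot product. The record encoding
# below is a standard textbook example, not code from the survey itself.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality; high D makes random vectors quasi-orthogonal

def random_hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: element-wise product; result is dissimilar to both inputs."""
    return a * b

def bundle(*vs):
    """Bundling: element-wise majority; result is similar to each input."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Cosine similarity; close to 0 for unrelated hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode the record {shape: circle, color: red} as a single hypervector.
shape, color, circle, red = (random_hv() for _ in range(4))
record = bundle(bind(shape, circle), bind(color, red))

# Query: unbind the 'color' role; the result is close to 'red'.
noisy_value = bind(record, color)  # bipolar binding is its own inverse
print(sim(noisy_value, red), sim(noisy_value, circle))  # high vs. ~0
```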


Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning

arXiv.org Artificial Intelligence

In meta-learning, networks are trained with external algorithms to learn tasks that require acquiring, storing and exploiting unpredictable information for each new instance of the task. However, animals are able to pick up such cognitive tasks automatically, as a result of their evolved neural architecture and synaptic plasticity mechanisms. Here we evolve neural networks, endowed with plastic connections, over a sizeable set of simple meta-learning tasks based on a framework from computational neuroscience. The resulting evolved network can automatically acquire ...

In one method, the "inner loop" stores information in the time-varying activities of a recurrent network, which is slowly optimized in the "outer loop" over many episodes [Hochreiter et al., 2001, Wang et al., 2016, Duan et al., 2016]. A biological interpretation of this method is that the inner loop represents the within-episode self-sustaining activity of the cerebral cortex, while the outer loop represents lifetime sculpting of neural connections by value-based neural plasticity (this interpretation is explored in detail by Wang et al. [2018]).
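
The inner/outer-loop structure described here is easy to see in code. The sketch below is a toy meta-learning setup on a two-armed bandit, assuming PyTorch; the network size, task, and plain REINFORCE objective are illustrative choices, not the paper's evolved, plastic architecture.

```python
# Toy inner/outer loop meta-learning sketch: the "inner loop" is the RNN's
# recurrent state adapting within one episode; the "outer loop" is the slow
# gradient update of the weights across episodes. Sizes/task are illustrative.
import torch
import torch.nn as nn

rng = torch.Generator().manual_seed(0)
net = nn.RNN(input_size=3, hidden_size=32, batch_first=True)  # input: one-hot action + reward
head = nn.Linear(32, 2)                                       # action logits for 2 arms
opt = torch.optim.Adam(list(net.parameters()) + list(head.parameters()), lr=1e-3)

for episode in range(1000):            # outer loop: slow weight optimization
    good_arm = torch.randint(2, (1,), generator=rng).item()   # unpredictable per episode
    h = None
    inp = torch.zeros(1, 1, 3)
    log_probs, rewards = [], []
    for t in range(20):                # inner loop: fast adaptation in activations
        out, h = net(inp, h)
        dist = torch.distributions.Categorical(logits=head(out[0, -1]))
        a = dist.sample()
        r = 1.0 if a.item() == good_arm else 0.0
        log_probs.append(dist.log_prob(a))
        rewards.append(r)
        inp = torch.zeros(1, 1, 3)     # feed back last action and reward
        inp[0, 0, a.item()] = 1.0
        inp[0, 0, 2] = r
    # REINFORCE over the whole episode: the weights learn *how* to adapt
    loss = -(torch.stack(log_probs) * torch.tensor(rewards)).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```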


Machine learning job: Data Scientist (Brain-Computer Interface Team) at AE Studio (work from anywhere!)

#artificialintelligence

Data Scientist (Brain-Computer Interface Team) at AE Studio. Remote, worldwide, 100% remote position (posted Nov 23, 2021). Are you a data scientist who is excited about brain-computer interfaces (BCIs) that increase human agency? Are you a self-starter who is comfortable with ambiguity and wants to tackle challenging engineering problems? Do you want to work with a world-class remote team while having a big impact on the machine-learning approach for an early-stage project? We are looking for a Data Scientist who is interested in working with us at AE Studio on the future of neurotechnology! About AE Studio: AE Studio is a mid-sized startup based in California.


Applications of the Free Energy Principle to Machine Learning and Neuroscience

arXiv.org Artificial Intelligence

In this thesis, we explore and apply methods inspired by the free energy principle to two important areas in machine learning and neuroscience. The free energy principle is a general mathematical theory of the necessary information-theoretic behaviours of systems which maintain a separation from their environment. A core postulate of the theory is that complex systems can be seen as performing variational Bayesian inference and minimizing an information-theoretic quantity called the variational free energy. The free energy principle originated in, and has been extremely influential in, theoretical neuroscience, having spawned a number of neurophysiologically realistic process theories and maintaining close links with Bayesian Brain viewpoints. The thesis is split into three main parts where we apply methods and insights from the free energy principle to understand questions first in perception, then action, and finally learning. Specifically, in the first section, we focus on the theory of predictive coding, a neurobiologically plausible process theory derived from the free energy principle under certain assumptions, which argues that the primary function of the brain is to minimize prediction errors. We focus on scaling up predictive coding architectures and simulate large-scale predictive coding networks for perception on machine learning benchmarks; we investigate predictive coding's relationship to other classical filtering algorithms; and we demonstrate that many biologically implausible aspects of current models of predictive coding can be relaxed without unduly harming their performance, which allows for a potentially more literal translation of predictive coding theory into cortical microcircuits. In the second part of the thesis, we focus on the application of methods deriving from the free energy principle to action. We study the extension of methods of 'active inference', a neurobiologically grounded account of action through variational message passing, to utilize deep artificial neural networks, allowing these methods to 'scale up' to be competitive with state-of-the-art deep reinforcement learning methods.
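
To make the predictive-coding claim concrete: the sketch below runs the iterative inference loop of a small linear predictive-coding hierarchy, where each layer updates its activity estimate to reduce the prediction errors above and below it. This is a generic Rao & Ballard style scheme under Gaussian assumptions, not the thesis's specific models.

```python
# Minimal numpy sketch of predictive-coding inference: each layer holds an
# activity estimate, sends a prediction downward, and moves its estimate to
# reduce prediction error; the squared-error sum (the free energy under
# Gaussian assumptions) falls as inference proceeds.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) * 0.1   # predicts layer-0 activity from layer-1
W2 = rng.normal(size=(8, 4)) * 0.1    # predicts layer-1 activity from layer-2
x0 = rng.normal(size=16)              # observed data, clamped
x1, x2 = np.zeros(8), np.zeros(4)     # latent activity estimates
lr = 0.05

for step in range(200):               # iterative inference to equilibrium
    e0 = x0 - W1 @ x1                 # bottom-up prediction error
    e1 = x1 - W2 @ x2
    # each latent both explains the error below it and conforms to the
    # prediction arriving from above
    x1 += lr * (W1.T @ e0 - e1)
    x2 += lr * (W2.T @ e1)

print(np.sum(e0**2) + np.sum(e1**2))  # total squared prediction error
```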


This mathematical brain model may pave the way for more human-like AI

#artificialintelligence

Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems. While all the presentations and discussions were worth watching (and I might revisit them again in the coming weeks), one in particular stood out for me: a talk on word representations in the brain by Christos Papadimitriou, professor of computer science at Columbia University. In his presentation, Papadimitriou, a recipient of the Gödel Prize and Knuth Prize, discussed how our growing understanding of information-processing mechanisms in the brain might help create algorithms that are more robust in understanding and engaging in conversations. Papadimitriou presented a simple and efficient model that explains how different areas of the brain inter-communicate to solve cognitive problems.
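
The model Papadimitriou presented is his "assembly calculus": brain areas as sparse random graphs in which only the top-k most stimulated neurons fire (the "cap") and firing strengthens synapses. Below is a hedged sketch of its basic projection operation; the sizes and plasticity rate are illustrative, not values from the talk.

```python
# Sketch of assembly-calculus projection: a fixed upstream stimulus repeatedly
# drives a downstream area; top-k capping plus Hebbian strengthening makes a
# stable "assembly" of winners emerge over rounds.
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta = 1000, 50, 0.05, 0.1              # area size, cap, density, plasticity
W_stim = (rng.random((k, n)) < p).astype(float)  # stimulus -> area synapses
W_rec = (rng.random((n, n)) < p).astype(float)   # recurrent synapses in area

winners = np.array([], dtype=int)
for t in range(10):                              # repeated projection rounds
    drive = W_stim.sum(axis=0)                   # all k stimulus cells fire
    if winners.size:
        drive += W_rec[winners].sum(axis=0)      # recurrent input from last round
    new_winners = np.argsort(drive)[-k:]         # cap: only top-k neurons fire
    # Hebbian update: synapses from firing cells onto the new winners strengthen
    W_stim[:, new_winners] *= 1 + beta
    if winners.size:
        W_rec[np.ix_(winners, new_winners)] *= 1 + beta
    overlap = np.intersect1d(winners, new_winners).size / k if winners.size else 0.0
    winners = new_winners

print(f"assembly stability: {overlap:.2f}")      # overlap climbs toward 1.0
```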


A brain basis of dynamical intelligence for AI and computational neuroscience

arXiv.org Artificial Intelligence

The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
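
As a small illustration of the "nested oscillations" motif mentioned above, the snippet below synthesizes a fast gamma rhythm whose amplitude follows the phase of a slower theta rhythm; the frequencies are typical textbook values, not parameters from the article.

```python
# Toy illustration of nested oscillations: gamma amplitude modulated by
# theta phase, one of the temporal-dynamics motifs the article highlights.
import numpy as np

fs = 1000                              # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)            # 2 seconds of signal
theta = np.sin(2 * np.pi * 6 * t)      # 6 Hz theta carrier
envelope = 0.5 * (1 + theta)           # gamma amplitude follows theta phase
gamma = envelope * np.sin(2 * np.pi * 60 * t)  # 60 Hz gamma, theta-nested
signal = theta + gamma                 # compound signal with nesting
# cells firing on successive gamma cycles within one theta cycle could carry
# an ordered sequence -- one reading of the "computational layer" above
print(signal[:5])
```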


Replay in Deep Learning: Current Approaches and Missing Biological Elements

arXiv.org Artificial Intelligence

Replay is the reactivation of one or more neural patterns, which are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated into deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this paper, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be utilized to improve artificial neural networks.
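
In deep learning, the replay mechanism described here usually amounts to a buffer of past examples interleaved with new data during training. Below is a minimal, generic sketch (a reservoir-sampled buffer plus a rehearsal loop); all names are illustrative and the training step is a stand-in, not any specific method from the paper.

```python
# Generic experience-replay sketch for continual learning: old examples are
# stored and rehearsed alongside new batches to mitigate catastrophic forgetting.
import random

class ReplayBuffer:
    """Reservoir-sampled buffer: keeps a uniform random sample of all data seen."""
    def __init__(self, capacity=10_000):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example  # replace a random stored example

    def sample(self, k):
        """Draw up to k stored examples to interleave with new data."""
        return random.sample(self.items, min(k, len(self.items)))

def train_step(examples):
    pass  # stand-in for one gradient step on `examples`

# Continual-learning loop: rehearse old examples alongside each new batch.
stream = ([(x, x % 3) for x in range(i * 32, (i + 1) * 32)] for i in range(100))
buffer = ReplayBuffer()
for batch in stream:
    train_step(batch + buffer.sample(len(batch)))
    for ex in batch:
        buffer.add(ex)
```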


Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

arXiv.org Artificial Intelligence

Through the developmental process, humans acquire basic physical skills (such as reaching and grasping), perceptual skills (such as object recognition and phoneme recognition), and social skills (such as linguistic communication and intention estimation) (Taniguchi et al., 2018). This open-ended online learning process involving many types of modalities, tasks, and interactions is often referred to as lifelong learning (Oudeyer et al., 2007; Parisi et al., 2019). The central question in next-generation artificial intelligence (AI) and developmental robotics is how to build an integrative cognitive system that is capable of lifelong learning and humanlike behavior in environments such as homes, offices, and the outdoors. In this paper, inspired by the human whole brain architecture (WBA) approach, we introduce the idea of building an integrative cognitive system using a whole brain probabilistic generative model (WB-PGM) (see 2.1). The integrative cognitive system can alternatively be referred to as artificial general intelligence (AGI) (Yamakawa, 2021). Against this backdrop, we explore the process of establishing a cognitive architecture for developmental robots. A cognitive architecture is a hypothesis about the mechanisms of human intelligence underlying our behaviors (Rosenbloom, 2011). The study of cognitive architecture involves developing a presumably standard model of the humanlike mind (Laird et al., 2017).


Hippocampal formation-inspired probabilistic generative model

arXiv.org Artificial Intelligence

We constructed a hippocampal formation (HPF)-inspired probabilistic generative model (HPF-PGM) using the structure-constrained interface decomposition method. By modeling brain regions with PGMs, this model is positioned as a module that can be integrated into a whole-brain PGM. We discuss the relationship between simultaneous localization and mapping (SLAM) in robotics and the findings on the HPF in neuroscience. Furthermore, we survey models of the HPF and various related computational approaches, including brain-inspired SLAM, spatial concept formation, and deep generative models. The HPF-PGM is a computational model that is highly consistent with the anatomical structure and functions of the HPF, in contrast to typical conventional SLAM models. By referencing the brain, we suggest the importance of integrating egocentric/allocentric information from the entorhinal cortex to the hippocampus and of using discrete-event queues.
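
The egocentric/allocentric integration highlighted at the end is, at its simplest, a coordinate transform conditioned on the agent's pose, as in SLAM measurement models. The toy function below illustrates that geometry; it is plain trigonometry, not the HPF-PGM implementation.

```python
# Toy egocentric-to-allocentric transform: a landmark observed at a range and
# bearing relative to the robot is mapped into world (map) coordinates given
# the robot's pose -- the basic update inside SLAM-style models.
import numpy as np

def ego_to_allo(robot_pose, r, bearing):
    """robot_pose = (x, y, heading); landmark seen at range r, bearing b."""
    x, y, heading = robot_pose
    ang = heading + bearing                 # rotate into the world frame
    return np.array([x + r * np.cos(ang),   # allocentric landmark position
                     y + r * np.sin(ang)])

pose = (2.0, 1.0, np.pi / 2)                  # robot at (2, 1), facing +y
print(ego_to_allo(pose, r=3.0, bearing=0.0))  # -> landmark near (2, 4)
```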