neocortex
The brain-AI convergence: Predictive and generative world models for general-purpose computation
Recent advances in general-purpose AI systems with attention-based transformers offer a potential window into how the neocortex and cerebellum, despite their relatively uniform circuit architectures, give rise to diverse functions and, ultimately, to human intelligence. This Perspective provides a cross-domain comparison between the brain and AI that goes beyond the traditional focus on visual processing, adopting the emerging perspective of world-model-based computation. Here, we identify shared computational mechanisms in the attention-based neocortex and the non-attentional cerebellum: both predict future world events from past inputs and construct internal world models through prediction-error learning. These predictive world models are repurposed for seemingly distinct functions -- understanding in sensory processing and generation in motor processing -- enabling the brain to achieve multi-domain capabilities and human-like adaptive intelligence. Notably, attention-based AI has independently converged on a similar learning paradigm and world-model-based computation. We conclude that these shared mechanisms in both biological and artificial systems constitute a core computational foundation for realizing diverse functions including high-level intelligence, despite their relatively uniform circuit structures. Our theoretical insights bridge neuroscience and AI, advancing our understanding of the computational essence of intelligence.
- North America > United States (0.14)
- Asia > Middle East > Jordan (0.04)
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (2 more...)
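The prediction-error learning described in the abstract above can be made concrete with a minimal sketch. The model, data, and parameters below are assumptions chosen for illustration (a linear predictor on a noisy sine wave), not the mechanism proposed by the authors; the point is only that a predictor of future inputs, updated by its own prediction error, gradually internalizes a model of its "world".

```python
# Minimal sketch of prediction-error learning of a toy world model (illustrative only).
# A linear predictor estimates the next observation from the previous k observations
# and updates its weights from the prediction error, gradually internalizing the
# dynamics of a simple "world" (here, a noisy sine wave).
import numpy as np

rng = np.random.default_rng(0)
world = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)

k, lr = 8, 0.01                      # context length and learning rate (assumed values)
w = np.zeros(k)                      # weights of the toy "internal model"

for t in range(k, len(world)):
    context = world[t - k:t]         # past inputs
    prediction = w @ context         # predict the next world event
    error = world[t] - prediction    # prediction error drives learning
    w += lr * error * context        # delta-rule weight update

print("final prediction error:", abs(error))
```

Once trained, the same weights can also be run on their own outputs to produce plausible continuations, which is the kind of repurposing the abstract calls generation.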
The Case That A.I. Is Thinking
ChatGPT does not have an inner life. Yet it seems to know what it's talking about. How convincing does the illusion of understanding have to be before you stop calling it an illusion? Dario Amodei, the C.E.O. of the artificial-intelligence company Anthropic, has been predicting that an A.I. "smarter than a Nobel Prize winner" in such fields as biology, math, engineering, and writing might come online by 2027. He envisions millions of copies of a model whirring away, each conducting its own research: a "country of geniuses in a datacenter." In June, Sam Altman, of OpenAI, wrote that the industry was on the cusp of building "digital superintelligence." "The 2030s are likely going to be wildly different from any time that has come before," he asserted. Meanwhile, the A.I. tools that most people currently interact with on a day-to-day basis are reminiscent of Clippy, the onetime Microsoft Office "assistant" that was actually more of a gadfly. A Zoom A.I. tool suggests that you ask it "What are some meeting icebreakers?" or instruct it to "Write a short message to share gratitude." Siri is good at setting reminders but not much else. A friend of mine saw a button in Gmail that said "Thank and tell anecdote." When he clicked it, Google's A.I. invented a funny story about a trip to Turkey that he never took. The rushed and uneven rollout of A.I. has created a fog in which it is tempting to conclude that there is nothing to see here--that it's all hype. There is, to be sure, plenty of hype: Amodei's timeline is science-fictional.
- Asia > Middle East > Republic of Türkiye (0.24)
- North America > United States > New York (0.04)
- Pacific Ocean > North Pacific Ocean > San Francisco Bay > Golden Gate (0.04)
- (9 more...)
- Summary/Review (1.00)
- Personal > Honors > Award (0.34)
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Government > Regional Government > North America Government > United States Government (0.46)
Semantic and episodic memories in a predictive coding model of the neocortex
Fontaine, Lucie, Alexandre, Frédéric
Complementary Learning Systems theory holds that intelligent agents need two learning systems. Semantic memory is encoded in the neocortex with dense, overlapping representations and acquires structured knowledge. Episodic memory is encoded in the hippocampus with sparse, pattern-separated representations and quickly learns the specifics of individual experiences. Recently, this duality between semantic and episodic memories has been challenged by predictive coding, a biologically plausible neural network model of the neocortex which was shown to have hippocampus-like abilities on auto-associative memory tasks. These results raise the question of the episodic capabilities of the neocortex and their relation to semantic memory. In this paper, we present such a predictive coding model of the neocortex and explore its episodic capabilities. We show that this kind of model can indeed recall the specifics of individual examples but only if it is trained on a small number of examples. The model is overfitted to these examples and does not generalize well, suggesting that episodic memory can arise from semantic learning. Indeed, a model trained with many more examples loses its recall capabilities. This work suggests that individual examples can be encoded gradually in the neocortex using dense, overlapping representations but only in a limited number, motivating the need for sparse, pattern-separated representations as found in the hippocampus.
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Consumer Health (1.00)
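The capacity argument in the abstract above can be illustrated with a toy stand-in. The sketch below uses a dense linear auto-associator trained with the delta rule (an assumption made for illustration, not the paper's predictive coding network): recall of individual patterns from corrupted cues is near-perfect when only a few patterns are stored and degrades as the number of stored patterns grows.

```python
# Hedged sketch (not the paper's model): a dense linear auto-associator trained
# with the delta rule, illustrating the capacity point in the abstract --
# individual patterns are recalled well when few are stored, and recall
# degrades as the number of stored patterns grows.
import numpy as np

def recall_accuracy(n_patterns, dim=100, epochs=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    patterns = rng.choice([-1.0, 1.0], size=(n_patterns, dim))
    W = np.zeros((dim, dim))
    for _ in range(epochs):                      # delta-rule training
        for p in patterns:
            W += lr * np.outer(p - W @ p, p) / dim
    cues = patterns.copy()
    cues[:, : dim // 4] *= -1                    # corrupt a quarter of each cue
    recalled = np.sign(W @ cues.T).T
    return (recalled == patterns).mean()

for n in (5, 20, 80):
    print(n, "patterns -> recall accuracy:", round(recall_accuracy(n), 3))
```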
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning
Jun, James P, Marupudi, Vijay, Shah, Raj Sanjay, Varma, Sashank
Learning new information without forgetting prior knowledge is central to human intelligence. In contrast, neural network models suffer from catastrophic forgetting: a significant degradation in performance on previously learned tasks when acquiring new information. The Complementary Learning Systems (CLS) theory offers an explanation for this human ability, proposing that the brain has distinct systems for pattern separation (encoding distinct memories) and pattern completion (retrieving complete memories from partial cues). To capture these complementary functions, we leverage the representational generalization capabilities of variational autoencoders (VAEs) and the robust memory storage properties of Modern Hopfield networks (MHNs), combining them into a neurally plausible continual learning model. We evaluate this model on the Split-MNIST task, a popular continual learning benchmark, and achieve close to state-of-the-art accuracy (~90%), substantially reducing forgetting. Representational analyses empirically confirm the functional dissociation: the VAE underwrites pattern completion, while the MHN drives pattern separation. By capturing pattern separation and completion in scalable architectures, our work provides a functional template for modeling memory consolidation, generalization, and continual learning in both biological and artificial systems.
- Health & Medicine > Therapeutic Area > Neurology (0.98)
- Education (0.68)
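Of the two components named in the abstract above, the Modern Hopfield network is the less familiar one. The sketch below shows its retrieval update (softmax attention over stored patterns, in the style of Ramsauer et al.) completing a partial cue; the pattern sizes, beta, and cue corruption are assumptions for illustration, and this is not the authors' full VAE+MHN model.

```python
# Hedged sketch of a Modern Hopfield retrieval step, the component the abstract
# credits with robust memory storage. Patterns, beta, and sizes are assumptions
# chosen for illustration.
import numpy as np

def mhn_retrieve(stored, query, beta=8.0, steps=3):
    """Iteratively update the query toward the most similar stored pattern.

    stored: (n_patterns, dim) matrix of memories
    query:  (dim,) partial or noisy cue
    """
    xi = query.copy()
    for _ in range(steps):
        sims = beta * stored @ xi                  # similarity to each memory
        attn = np.exp(sims - sims.max())
        attn /= attn.sum()                         # softmax attention over memories
        xi = stored.T @ attn                       # convex combination of memories
    return xi

rng = np.random.default_rng(1)
memories = rng.choice([-1.0, 1.0], size=(10, 64))
cue = memories[3].copy()
cue[:20] = 0.0                                     # partial cue: first 20 entries missing
out = mhn_retrieve(memories, cue)
print("retrieved pattern 3:", np.array_equal(np.sign(out), memories[3]))
```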
Hierarchy or Heterarchy? A Theory of Long-Range Connections for the Sensorimotor Brain
Hawkins, Jeff, Leadholm, Niels, Clay, Viviane
In the traditional understanding of the neocortex, sensory information flows up a hierarchy of regions, with each level processing increasingly complex features. Information also flows down the hierarchy via a different set of connections. Although the hierarchical model has significant support, many anatomical connections do not conform to the standard hierarchical interpretation. In addition, hierarchically arranged regions sometimes respond in parallel, not sequentially as would occur in a hierarchy. This and other evidence suggest that two regions can act in parallel and hierarchically at the same time. Given this flexibility, the word "heterarchy" might be a more suitable term to describe neocortical organization. This paper proposes a new interpretation of how sensory and motor information is processed in the neocortex. The key to our proposal is what we call the "Thousand Brains Theory", which posits that every cortical column is a sensorimotor learning system. Columns learn by integrating sensory input over multiple movements of a sensor. In this view, even primary and secondary regions, such as V1 and V2, can learn and recognize complete 3D objects. This suggests that the hierarchical connections between regions are used to learn the compositional structure of parent objects composed of smaller child objects. We explain the theory by examining the different types of long-range connections between cortical regions and between the neocortex and thalamus. We describe these connections, and then suggest the specific roles they play in the context of a heterarchy of sensorimotor regions. We also suggest that the thalamus plays an essential role in transforming the pose between objects and sensors. The novel perspective we argue for here has broad implications for both neuroscience and artificial intelligence.
- Asia > Japan > Honshū > Tōhoku > Fukushima Prefecture > Fukushima (0.04)
- North America > United States > California > San Mateo County > Redwood City (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
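As a loose toy reading of the claim above that every cortical column is a sensorimotor learning system integrating input over multiple movements, the sketch below has several "columns" each accumulate evidence about an unknown object over a few simulated sensor movements and then pool their evidence. The object definitions, noise level, and pooling rule are all assumptions made for illustration, not the model proposed in the paper.

```python
# Loose toy illustration (not the paper's model): several "columns" each observe
# a sequence of (location, feature) pairs from an unknown object, accumulate
# evidence for candidate objects independently, and then pool their votes.
import numpy as np

rng = np.random.default_rng(2)
n_objects, n_locations, n_features = 4, 12, 6
# Each object is a mapping from location to the feature sensed there.
objects = rng.integers(n_features, size=(n_objects, n_locations))

def column_evidence(true_obj, n_moves=5, noise=0.1):
    """One column: accumulate log-evidence over a few sensor movements."""
    log_ev = np.zeros(n_objects)
    for _ in range(n_moves):
        loc = rng.integers(n_locations)
        feat = objects[true_obj, loc]
        if rng.random() < noise:                   # occasionally missense a feature
            feat = rng.integers(n_features)
        # p(observation | object): high if the object predicts this feature here
        likelihood = np.where(objects[:, loc] == feat, 0.9, 0.1 / (n_features - 1))
        log_ev += np.log(likelihood)
    return log_ev

true_obj = 2
votes = sum(column_evidence(true_obj) for _ in range(8))   # 8 columns pool evidence
print("recognized object:", int(np.argmax(votes)), "| true object:", true_obj)
```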
A Phenomenological Approach to Analyzing User Queries in IT Systems Using Heidegger's Fundamental Ontology
This paper presents a novel research analytical IT system grounded in Martin Heidegger's Fundamental Ontology, distinguishing between beings (das Seiende) and Being (das Sein). The system employs two modally distinct, descriptively complete languages: a categorical language of beings for processing user inputs and an existential language of Being for internal analysis. These languages are bridged via a phenomenological reduction module, enabling the system to analyze user queries (including questions, answers, and dialogues among IT specialists), identify recursive and self-referential structures, and provide actionable insights in categorical terms. Unlike contemporary systems limited to categorical analysis, this approach leverages Heidegger's phenomenological existential analysis to uncover deeper ontological patterns in query processing, aiding in resolving logical traps in complex interactions, such as metaphor usage in IT contexts. The path to full realization involves formalizing the language of Being by a research team based on Heidegger's Fundamental Ontology; given the existing completeness of the language of beings, this reduces the system's computability to completeness, paving the way for a universal query analysis tool. The paper presents the system's architecture, operational principles, technical implementation, use cases--including a case based on real IT specialist dialogues--comparative evaluation with existing tools, and its advantages and limitations.
- Europe > Germany > Hesse > Darmstadt Region > Frankfurt (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
Why is deep sleep so important to memory? It's about time.
It's no hidden health secret that sleep is really good for us. It helps our immune systems and supports almost every organ system in the body. We've also known for almost two decades that the slow, synchronous electrical waves in the brain during deep sleep support memory formation. However, we did not know exactly how the brain does this until now. These slow waves make the neocortex, where long-term memory is stored in the brain, particularly receptive to new information.
The brain versus AI: World-model-based versatile circuit computation underlying diverse functions in the neocortex and cerebellum
AI's significant recent advances using general-purpose circuit computations offer a potential window into how the neocortex and cerebellum of the brain are able to achieve a diverse range of functions across sensory, cognitive, and motor domains, despite their uniform circuit structures. However, comparing the brain and AI is challenging unless clear similarities exist, and past reviews have been limited to comparison of brain-inspired vision AI and the visual neocortex. Here, to enable comparisons across diverse functional domains, we subdivide circuit computation into three elements -- circuit structure, input/outputs, and the learning algorithm -- and evaluate the similarities for each element. With this novel approach, we identify wide-ranging similarities and convergent evolution in the brain and AI, providing new insights into key concepts in neuroscience. Furthermore, inspired by processing mechanisms of AI, we propose a new theory that integrates established neuroscience theories, particularly the theories of internal models and the mirror neuron system. Both the neocortex and cerebellum predict future world events from past information and learn from prediction errors, thereby acquiring models of the world. These models enable three core processes: (1) Prediction -- generating future information, (2) Understanding -- interpreting the external world via compressed and abstracted sensory information, and (3) Generation -- repurposing the future-information generation mechanism to produce other types of outputs. The universal application of these processes underlies the ability of the neocortex and cerebellum to accomplish diverse functions with uniform circuits. Our systematic approach, insights, and theory promise groundbreaking advances in understanding the brain.
- Asia > Middle East > Jordan (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Asia > China > Beijing > Beijing (0.04)
- (7 more...)
- Leisure & Entertainment > Games (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
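The three core processes named in the abstract above (prediction, understanding, generation) can be loosely illustrated with a single toy internal model. The sketch below uses an echo-state-style network as a stand-in (my choice, not the authors' formalism): its hidden state acts as a compressed abstraction of the input history (understanding), a linear readout fitted to minimize prediction error predicts the next input (prediction), and feeding predictions back in as inputs produces new sequences (generation).

```python
# Hedged sketch: one toy internal model reused for prediction, understanding,
# and generation. All sizes and the echo-state-style setup are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_hidden, T = 200, 1000
x = np.sin(np.linspace(0, 40 * np.pi, T + 1))             # toy "world" signal

W_in = rng.uniform(-0.5, 0.5, size=n_hidden)
W = rng.standard_normal((n_hidden, n_hidden))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))           # keep the dynamics stable

# Understanding: drive the network and collect hidden states, a compressed
# abstraction of the input history.
H = np.zeros((T, n_hidden))
h = np.zeros(n_hidden)
for t in range(T):
    h = np.tanh(W @ h + W_in * x[t])
    H[t] = h

# Prediction: a linear readout fitted by ridge regression to predict the next input.
targets = x[1:T + 1]
W_out = np.linalg.solve(H.T @ H + 1e-4 * np.eye(n_hidden), H.T @ targets)

# Generation: feed the model's own predictions back in as inputs.
gen = []
for _ in range(100):
    x_next = h @ W_out
    h = np.tanh(W @ h + W_in * x_next)
    gen.append(x_next)
print("first generated values:", np.round(gen[:5], 3))
```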
Manifold Learning via Memory and Context
Given a memory with infinite capacity, can we solve the learning problem? Apparently, nature has solved this problem as evidenced by the evolution of mammalian brains. Inspired by the organizational principles underlying hippocampal-neocortical systems, we present a navigation-based approach to manifold learning using memory and context. The key insight is to navigate on the manifold and memorize the positions of each route as inductive/design bias of direct-fit-to-nature. We name it navigation-based because our approach can be interpreted as navigating in the latent space of sensorimotor learning via memory (local maps) and context (global indexing). The indexing to the library of local maps within global coordinates is collected by an associative memory serving as the librarian, which mimics the coupling between the hippocampus and the neocortex. In addition to breaking from the notorious bias-variance dilemma and the curse of dimensionality, we discuss the biological implementation of our navigation-based learning by episodic and semantic memories in neural systems. The energy efficiency of navigation-based learning makes it suitable for hardware implementation on non-von Neumann architectures, such as the emerging in-memory computing paradigm, including spiking neural networks and memristor neural networks.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York > Albany County > Albany (0.04)
- North America > United States > Florida > Palm Beach County > Boca Raton (0.04)
- (2 more...)
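A loose sketch of the navigation-and-indexing idea in the abstract above: observations along a 1D manifold are memorized as a set of local maps (routes), and an associative lookup over route centroids plays the librarian that selects which local map to consult. The spiral data, the number of routes, and the centroid-based indexing are assumptions made for illustration, not the paper's implementation.

```python
# Loose toy sketch of navigation-based learning via memory and context (my
# reading of the abstract, not the paper's implementation). Observations lie on
# a 1D manifold (a spiral in the plane); "routes" along the manifold are stored
# as local maps of (observation, position) pairs, and an associative memory over
# route centroids indexes the right local map.
import numpy as np

s = np.linspace(0, 4 * np.pi, 400)                          # position on the manifold
obs = np.stack([s * np.cos(s), s * np.sin(s)], axis=1)      # spiral observations

# Memorize routes: split the trajectory into local maps.
n_routes = 8
routes = np.array_split(np.arange(len(s)), n_routes)
local_maps = [(obs[idx], s[idx]) for idx in routes]          # (observations, positions)
centroids = np.array([obs[idx].mean(axis=0) for idx in routes])

def localize(query):
    """Librarian: pick the local map whose centroid is nearest to the query,
    then look up the closest memorized observation inside that map."""
    r = np.argmin(np.linalg.norm(centroids - query, axis=1))
    pts, pos = local_maps[r]
    return pos[np.argmin(np.linalg.norm(pts - query, axis=1))]

true_pos = 7.3
query = np.array([true_pos * np.cos(true_pos), true_pos * np.sin(true_pos)])
print("estimated position on the manifold:", round(localize(query), 2), "| true:", true_pos)
```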
Gentle Brain Stimulation Can Improve Memory During Sleep
While we're asleep at night, our brain is doing something incredible. The hippocampus and the neocortex, two of its key regions, talk back and forth, processing information for long-term storage--what's known as memory consolidation. Catching Z's, as it turns out, is critical for building our mental library. "During sleep, a magical process happens," says Itzhak Fried, a neurosurgeon at the University of California at Los Angeles. In a study recently published in Nature Neuroscience, Fried and his team have discovered that this process can be hacked.
- North America > United States > California > Los Angeles County > Los Angeles (0.26)
- North America > United States > New York (0.06)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.06)