continuous attractor
Back to the Continuous Attractor
Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general -- they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility, especially in biological systems, as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors for maintaining memory are categorically distinct, their finite-time behaviors are similar. We build on persistent manifold theory to explain the commonalities between bifurcations from, and approximations of, continuous attractors. Fast-slow decomposition analysis uncovers a persistent slow manifold that survives the seemingly destructive bifurcation, relating the flow within the manifold to the size of the perturbation; this in turn bounds the memory error of these approximations of continuous attractors. Finally, we train recurrent neural networks on analog memory tasks, confirming that such systems appear as solutions and probing their generalization capabilities. We conclude that continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.
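The tension this abstract describes, fragile in structure yet robust in function, is easy to reproduce numerically. Below is a minimal sketch in Python (our illustration, not code from the paper): an ideal 1-D line attractor dx/dt = 0 stores any value forever, while a small perturbation of size eps leaves only isolated fixed points yet produces recall error that grows at a rate of order eps, in the spirit of the memory-error bound above. The sinusoidal perturbation and all names are our own choices.

```python
# Minimal sketch (not the paper's code): a perturbed line attractor.
# The unperturbed system dx/dt = 0 holds any x forever; adding eps*sin(x)
# leaves only isolated fixed points, but drift on the surviving slow
# manifold is O(eps), so finite-time recall stays accurate for small eps.
import numpy as np

def drift(x, eps):
    """dx/dt for the perturbed line attractor: -eps * sin(x)."""
    return -eps * np.sin(x)

def recall(x0, eps, t_max=100.0, dt=0.01):
    """Store x0, integrate the dynamics, and return the recalled value."""
    x = x0
    for _ in range(int(t_max / dt)):
        x += dt * drift(x, eps)   # forward Euler step
    return x

x0 = 1.0  # stored analog value
for eps in (0.0, 1e-3, 1e-2):
    x = recall(x0, eps)
    print(f"eps={eps:g}: recalled {x:.4f}, error {abs(x - x0):.4f}")
# The printed recall error after a fixed time scales roughly with eps.
```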
A Differential Manifold Perspective and Universality Analysis of Continuous Attractors in Artificial Neural Networks
Tian, Shaoxin, Liu, Hongkai, Yang, Yuying, Yu, Jiali, Miao, Zizheng, Huang, Xuming, Liu, Zhishuai, Yi, Zhang
Continuous attractors are critical for information processing in both biological and artificial neural systems, with implications for spatial navigation, memory, and deep learning optimization. However, existing research lacks a unified framework to analyze their properties across diverse dynamical systems, limiting cross-architectural generalizability. This study establishes a novel framework from the perspective of differential manifolds to investigate continuous attractors in artificial neural networks. It verifies compatibility with prior conclusions, elucidates links between continuous attractor phenomena and eigenvalues of the local Jacobian matrix, and demonstrates the universality of singular value stratification in common classification models and datasets. These findings suggest continuous attractors may be ubiquitous in general neural networks, highlighting the need for a general theory, with the proposed framework offering a promising foundation given the close mathematical connection between eigenvalues and singular values.
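To make the eigenvalue picture concrete, here is a minimal sketch (our construction, not the authors' code) of a linear RNN with a line attractor built in: one recurrent eigenvalue is pinned to 1 and the rest contract. The spectrum then shows exactly one marginal direction plus a gap, and because the construction is symmetric, singular values and eigenvalue magnitudes coincide, mirroring the eigenvalue/singular-value connection the abstract invokes. All parameters are hypothetical.

```python
# Minimal sketch (hypothetical construction): a linear RNN whose recurrent
# matrix has one eigenvalue at exactly 1 (the stored direction) and the
# rest inside the unit circle, i.e. a line attractor by design.
import numpy as np

rng = np.random.default_rng(0)
n = 32
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))        # random orthonormal basis
lams = np.concatenate(([1.0], rng.uniform(0.2, 0.8, n - 1)))
W = Q @ np.diag(lams) @ Q.T                          # symmetric: svals == |eigs|

h = rng.normal(size=n)
for _ in range(200):
    h = W @ h    # contracting directions decay; the line direction persists

eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
print("top |eigenvalues|:", np.round(eigs[:4], 3))   # 1.0, then a gap
print("persisting state norm:", np.round(np.linalg.norm(h), 3))
# The unit eigenvalue of the (here state-independent) Jacobian marks the
# attractor's tangent direction; the gap below it is the kind of spectral
# stratification the paper examines via singular values.
```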
A Unified Cortical Circuit Model with Divisive Normalization and Self-Excitation for Robust Representation and Memory Maintenance
Su, Jie, Wang, Weiwei, Gu, Zhaotian, Wang, Dahui, Qian, Tianyi
Robust information representation and its persistent maintenance are fundamental for higher cognitive functions. Existing models employ distinct neural mechanisms to separately address noise-resistant processing or information maintenance, yet a unified framework integrating both operations remains elusive -- a critical gap in understanding cortical computation. Here, we introduce a recurrent neural circuit that combines divisive normalization with self-excitation to achieve both robust encoding and stable retention of normalized inputs. Mathematical analysis shows that, for suitable parameter regimes, the system forms a continuous attractor with two key properties: (1) input-proportional stabilization during stimulus presentation; and (2) self-sustained memory states persisting after stimulus offset. We demonstrate the model's versatility in two canonical tasks: (a) noise-robust encoding in a random-dot kinematogram (RDK) paradigm; and (b) approximate Bayesian belief updating in a probabilistic Wisconsin Card Sorting Test (pWCST). This work establishes a unified mathematical framework that bridges noise suppression, working memory, and approximate Bayesian inference within a single cortical microcircuit, offering fresh insights into the brain's canonical computation and guiding the design of biologically plausible artificial neural architectures.
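The abstract's mechanism can be previewed with a toy rate model. The sketch below is our own construction with hypothetical parameters (tau, alpha, sigma), not the paper's equations: each rate receives divisively normalized input plus self-excitation, tau * dr_i/dt = -r_i + (I_i + alpha * r_i) / (sigma + sum_j r_j). With input on, activity settles to a scaled copy of the stimulus; with input off, every pattern whose summed rate equals alpha - sigma is a fixed point, so the normalized pattern is retained on a continuous attractor.

```python
# Minimal sketch (our toy model, hypothetical parameters) of a rate circuit
# mixing divisive normalization with self-excitation:
#   tau * dr_i/dt = -r_i + (I_i + alpha * r_i) / (sigma + sum_j r_j)
import numpy as np

def simulate(I, r0, T, tau=1.0, alpha=2.0, sigma=0.5, dt=0.01):
    r = r0.copy()
    for _ in range(int(T / dt)):
        drive = (I + alpha * r) / (sigma + r.sum())
        r += (dt / tau) * (-r + drive)
        r = np.clip(r, 0.0, None)   # rates stay non-negative
    return r

n = 5
stim = np.array([3.0, 1.0, 0.5, 0.2, 0.1])

r = simulate(stim, r0=np.full(n, 0.1), T=50)    # stimulus period
print("encoded (normalized):", np.round(r / r.sum(), 3))

r = simulate(np.zeros(n), r0=r, T=200)          # delay period, no input
print("maintained pattern:  ", np.round(r / r.sum(), 3))
print("summed rate (approaches alpha - sigma = 1.5):", np.round(r.sum(), 3))
# The relative pattern persists after stimulus offset while the total rate
# pins to alpha - sigma: memory lives on a continuous attractor.
```

With no input, the dynamics scale all rates by a common factor, so the stored pattern is preserved exactly while only the total rate relaxes; this is what makes the delay-period state a genuine continuous attractor in this toy.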
Disentangling Representations in RNNs through Multi-task Learning
Vafidis, Pantelis, Bhargava, Aman, Rangel, Antonio
Abstract, or disentangled, representations are a promising mathematical framework for efficient and effective generalization in both biological and artificial systems. We investigate abstract representations in the context of multi-task classification over noisy evidence streams -- a canonical decision-making neuroscience paradigm. We derive theoretical bounds that guarantee the emergence of disentangled representations in the latent state of any optimal multi-task classifier, when the number of tasks exceeds the dimensionality of the state space. We experimentally confirm that RNNs trained on multi-task classification learn disentangled representations in the form of continuous attractors, leading to zero-shot out-of-distribution (OOD) generalization. We demonstrate the flexibility of the abstract RNN representations across various decision boundary geometries and in tasks requiring classification confidence estimation. Our framework suggests a general principle for the formation of cognitive maps that organize knowledge to enable flexible generalization in biological and artificial systems alike, and closely relates to representations found in humans and animals during decision-making and spatial reasoning tasks.
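The core claim, that solving more tasks than latent dimensions forces an integrator-like disentangled code, can be previewed without training anything. The sketch below is our construction, not the paper's experiments: a 2-D latent drives a noisy evidence stream, eight tasks ask for the sign of different linear projections, and a perfect integrator (the continuous-attractor solution) answers all of them, including latents far outside the in-distribution radius.

```python
# Minimal sketch (our construction, not the paper's experiments) of the
# multi-task setup: a 2-D latent z is observed through a noisy evidence
# stream, and each "task" asks for the sign of a different projection of z.
# An ideal integrator, i.e. a plane attractor that sums evidence, solves
# every task at once, including out-of-distribution (OOD) latents.
import numpy as np

rng = np.random.default_rng(1)
d, n_tasks, T = 2, 8, 200
task_w = rng.normal(size=(n_tasks, d))   # random boundaries through the origin

def run_trial(z):
    """Noisy evidence stream for latent z, decoded by a perfect integrator."""
    evidence = z + rng.normal(scale=2.0, size=(T, d))
    h = evidence.mean(axis=0)   # integration = continuous-attractor state
    return np.sign(task_w @ h)

for name, radius in [("in-dist", 1.0), ("OOD", 5.0)]:
    trials, correct = 500, 0.0
    for _ in range(trials):
        theta = rng.uniform(0, 2 * np.pi)
        z = radius * np.array([np.cos(theta), np.sin(theta)])
        correct += np.mean(run_trial(z) == np.sign(task_w @ z))
    print(f"{name}: mean task accuracy {correct / trials:.3f}")
# Because the integrator represents z itself (a disentangled code), accuracy
# transfers to OOD radii with no retraining, the zero-shot generalization
# the paper reports for trained RNNs.
```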
Persistent learning signals and working memory without continuous attractors
Park, Il Memming, Ságodi, Ábel, Sokół, Piotr Aleksander
Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.
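As a flavor of how quasi-periodic dynamics can be seeded at initialization, here is a minimal sketch; this is our guess at the general idea, not necessarily the initialization scheme the paper proposes. The recurrent matrix is block-diagonal in 2x2 rotations with randomly drawn (almost surely incommensurate) frequencies, so the untrained linear dynamics are quasi-periodic: activity neither decays nor explodes over long horizons.

```python
# Minimal sketch (our guess at the flavor of such a scheme, not the paper's
# actual method): a block-diagonal rotation initialization whose eigenvalues
# all sit on the unit circle, giving quasi-periodic untrained dynamics.
import numpy as np

def rotation_block(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def quasi_periodic_init(n, rng):
    """Block-diagonal orthogonal matrix of 2x2 rotations; n must be even."""
    thetas = rng.uniform(0.05, 1.0, size=n // 2)  # incommensurate w.h.p.
    W = np.zeros((n, n))
    for k, th in enumerate(thetas):
        W[2 * k:2 * k + 2, 2 * k:2 * k + 2] = rotation_block(th)
    return W

rng = np.random.default_rng(2)
W = quasi_periodic_init(16, rng)
h = rng.normal(size=16)
norms = [np.linalg.norm(np.linalg.matrix_power(W, t) @ h)
         for t in (1, 100, 10000)]
print("state norm at t=1, 100, 10000:", np.round(norms, 3))  # stays constant
# Norm preservation means long temporal relationships can be carried (and
# gradients propagated) without the fine-tuning a continuous attractor needs;
# training then shapes these rotations to the task.
```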