Quoy, Mathias
Developmental Predictive Coding Model for Early Infancy Mono and Bilingual Vocal Continual Learning
Chen, Xiaodan, Pitti, Alexandre, Quoy, Mathias, Chen, Nancy F
Understanding how infants perceive speech sounds and language structures is still an open problem. Previous research in artificial neural networks has mainly focused on large dataset-dependent generative models, aiming to replicate language-related phenomena such as "perceptual narrowing". In this paper, we propose a novel approach using a small-sized generative neural network equipped with a continual learning mechanism based on predictive coding for mono- and bilingual speech sound learning (referred to as language sound acquisition during the "critical period") and a compositional optimization mechanism for generation where no learning is involved (later-infancy sound imitation). Our model prioritizes interpretability and demonstrates the advantages of online learning: unlike deep networks requiring substantial offline training, our model continuously updates with new data, making it adaptable and responsive to changing inputs. Through experiments, we demonstrate that if second-language acquisition occurs during later infancy, the challenges associated with learning a foreign language after the critical period are amplified, replicating the perceptual narrowing effect.
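A minimal sketch of the kind of online predictive-coding update such a model relies on (illustrative only; the layer sizes, learning rates, and NumPy implementation below are assumptions, not the authors' code). Each incoming observation first settles a latent state by reducing the prediction error, then the generative weights are adjusted from that same error, so the model learns sample by sample rather than from an offline dataset.

    # Minimal online predictive-coding sketch (hypothetical sizes and rates).
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_latent = 40, 10                  # assumed dimensions
    W = 0.1 * rng.standard_normal((n_in, n_latent))

    def pc_step(x, W, n_inner=20, lr_z=0.1, lr_w=0.01):
        """One online predictive-coding update on a single observation x."""
        z = np.zeros(n_latent)
        for _ in range(n_inner):             # inference: settle the latent cause
            err = x - W @ z                  # prediction error
            z += lr_z * (W.T @ err)          # reduce the error w.r.t. z
        err = x - W @ z
        W += lr_w * np.outer(err, z)         # learning: error-driven weight update
        return W, np.mean(err ** 2)

    # observations arrive one at a time (continual / online learning)
    for t in range(200):
        x = rng.standard_normal(n_in)
        W, mse = pc_step(x, W)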
Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference
Annabi, Louis, Pitti, Alexandre, Quoy, Mathias
In this work, we tackle the problem of motor sequence learning for an embodied agent. A wide range of approaches have been proposed to model sequential data, using various types of neural architectures (Recurrent Neural Networks (RNNs), Long Short-Term Memories (LSTMs) [1], Restricted Boltzmann Machines (RBMs) [2]) and various learning strategies (backpropagation through time (BPTT), Real-Time Recurrent Learning (RTRL) [3], Reservoir Computing (RC) [4, 5]). Instead, supervision can be available in the shape of desired sensory observations, for instance provided by a teaching agent. In the case of handwriting, these desired sensory observations are visual observations of the target letters. In reinforcement learning, the preference for certain sensory states is modeled by assigning rewards to the desired states, and the agent learns a behavioral policy maximizing its expected return (sum of rewards) over time. Alternatively, Active Inference (AIF) [6, 7] is derived from the Free Energy Principle.
Digital Neural Networks in the Brain: From Mechanisms for Extracting Structure in the World To Self-Structuring the Brain Itself
Pitti, Alexandre, Quoy, Mathias, Lavandier, Catherine, Boucenna, Sofiane
In order to keep track of information, the brain has to resolve the problem of where information is located and how new information is indexed. We propose that the neural mechanism used by the prefrontal cortex (PFC) to detect structure in temporal sequences, based on the temporal order of incoming information, has served a second purpose: the spatial ordering and indexing of brain networks. We call this process, akin to the manipulation of neural 'addresses' to organize the brain's own network, the 'digitalization' of information. Such a tool is important for information processing and preservation, but also for memory formation and retrieval.
Autonomous learning and chaining of motor primitives using the Free Energy Principle
Annabi, Louis, Pitti, Alexandre, Quoy, Mathias
In this article, we apply the Free-Energy Principle to the question of motor primitives learning. An echo-state network is used to generate motor trajectories. We combine this network with a perception module and a controller that can influence its dynamics. This new compound network permits the autonomous learning of a repertoire of motor trajectories. To evaluate the repertoires built with our method, we exploit them in a handwriting task where primitives are chained to produce long-range sequences.
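The trajectory-generating echo-state network can be sketched as follows (a toy illustration under assumed sizes, with a made-up 2-D curve standing in for a pen stroke; this is not the paper's actual setup). A fixed random reservoir with output feedback is driven by the target during training, a ridge-regression readout is fitted, and the readout is then fed back so the network generates the trajectory autonomously.

    # Echo-state network sketch: fixed reservoir, trained linear readout.
    import numpy as np

    rng = np.random.default_rng(1)
    n_res, n_out, T = 200, 2, 400            # assumed sizes

    W = rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1
    W_fb = rng.uniform(-1, 1, (n_res, n_out))          # output feedback weights

    t = np.linspace(0, 2 * np.pi, T)
    target = np.stack([np.cos(t), np.sin(2 * t)], axis=1)   # toy "motor primitive"

    # teacher forcing: drive the reservoir with the previous target as feedback
    X = np.zeros((T, n_res))
    x = np.zeros(n_res)
    prev = np.zeros(n_out)
    for k in range(T):
        x = np.tanh(W @ x + W_fb @ prev)
        X[k] = x
        prev = target[k]

    # ridge-regression readout mapping reservoir states to the target
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target).T

    # autonomous generation: the readout output replaces the teacher signal
    x = np.zeros(n_res)
    y = np.zeros(n_out)
    trajectory = []
    for k in range(T):
        x = np.tanh(W @ x + W_fb @ y)
        y = W_out @ x
        trajectory.append(y)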
Destabilization and Route to Chaos in Neural Networks with Random Connectivity
Doyon, Bernard, Cessac, Bruno, Quoy, Mathias, Samuelides, Manuel
The occurrence of chaos in recurrent neural networks is supposed to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function but independent of the connectivity, that allows sustained activity and the occurrence of chaos when it reaches a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected, infinite-sized networks. Moreover, the route towards chaos is numerically checked to be a quasi-periodic one, whatever the type of the first bifurcation (Hopf, pitchfork or flip).
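The destabilization scenario can be reproduced numerically in a few lines (a sketch under assumed parameters, using a fully connected Gaussian coupling matrix rather than the paper's randomly diluted architecture). In the discrete-time network x(t+1) = tanh(g J x(t)) with J drawn from N(0, 1/N), the gain g plays the role of the bifurcation parameter: as it grows past a critical value, the estimated largest Lyapunov exponent crosses zero, signalling the transition from fixed-point or (quasi-)periodic activity to chaos.

    # Estimate the largest Lyapunov exponent of x(t+1) = tanh(g * J x(t))
    # by tracking the divergence of two nearby trajectories.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 100
    J = rng.standard_normal((N, N)) / np.sqrt(N)   # variance 1/N couplings

    def lyapunov_estimate(g, steps=2000, eps=1e-8):
        x = 0.1 * rng.standard_normal(N)
        d0 = rng.standard_normal(N)
        y = x + eps * d0 / np.linalg.norm(d0)      # perturbed copy at distance eps
        total = 0.0
        for _ in range(steps):
            x = np.tanh(g * J @ x)
            y = np.tanh(g * J @ y)
            d = np.linalg.norm(y - x)
            total += np.log(d / eps)               # local expansion rate
            y = x + (y - x) * eps / d              # renormalize the perturbation
        return total / steps

    for g in [0.5, 1.0, 1.5, 2.0, 3.0]:
        print(f"g = {g:.1f}  lambda_max ~ {lyapunov_estimate(g):+.3f}")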