Goto

Collaborating Authors: Cleeremans, Axel


Collective decision making by embodied neural agents

arXiv.org Artificial Intelligence

Collective decision making using simple social interactions has been studied in many types of multi-agent systems, including robot swarms and human social networks. However, existing multi-agent studies have rarely modeled the neural dynamics that underlie sensorimotor coordination in embodied biological agents. In this study, we investigated collective decisions that resulted from sensorimotor coordination among agents with simple neural dynamics. We equipped our agents with a model of minimal neural dynamics based on the coordination dynamics framework, and embedded them in an environment with a stimulus gradient. In our single-agent setup, the decision between two stimulus sources depends solely on the coordination of the agent's neural dynamics with its environment. In our multi-agent setup, that same decision also depends on the sensorimotor coordination between agents, via their simple social interactions. Our results show that the success of collective decisions depends on a balance of intra-agent, inter-agent, and agent-environment coupling, and we use these results to identify how environmental factors influence decision difficulty. More generally, our results demonstrate the impact of intra- and inter-brain coordination dynamics on collective behavior, contribute to existing knowledge on the functional role of inter-agent synchrony, and are relevant to ongoing developments in neuro-AI and self-organized multi-agent systems.
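The paper's exact model is not reproduced here, but the following minimal Python sketch illustrates the kind of setup the abstract describes: agents with simple oscillatory neural dynamics (Kuramoto/HKB-style phase coupling) that are coupled to one another and to a two-source stimulus gradient, with the collective decision read off from which source the group converges on. All parameter values, coupling terms, and the gradient-climbing rule below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's model): embodied agents whose internal phase
# dynamics are coupled to each other and to a stimulus gradient, in the spirit
# of the coordination dynamics framework. All names and constants are assumed.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 5
SOURCES = np.array([[-1.0, 0.0], [1.0, 0.0]])        # two stimulus sources
DT, STEPS = 0.01, 5000

def stimulus(pos):
    """Sum of Gaussian stimulus fields centred on the two sources."""
    d2 = ((pos[None, :] - SOURCES) ** 2).sum(axis=1)
    return np.exp(-d2).sum()

pos = rng.normal(0.0, 0.2, size=(N_AGENTS, 2))       # agent positions
phase = rng.uniform(0, 2 * np.pi, size=N_AGENTS)     # internal neural phase
omega = rng.normal(1.0, 0.05, size=N_AGENTS)         # intrinsic frequencies

K_SOCIAL, K_ENV, SPEED = 0.5, 0.8, 0.5               # coupling strengths (assumed)

for _ in range(STEPS):
    # inter-agent coupling: Kuramoto-style phase attraction
    diff = phase[None, :] - phase[:, None]
    social = K_SOCIAL * np.sin(diff).mean(axis=1)
    # agent-environment coupling: local stimulus intensity drives the phase
    drive = np.array([stimulus(p) for p in pos])
    phase += DT * (omega + social + K_ENV * drive)
    # embodiment: agents climb the stimulus gradient, gated by their phase
    for i in range(N_AGENTS):
        grad, eps = np.zeros(2), 1e-3
        for d in range(2):
            step = np.zeros(2); step[d] = eps
            grad[d] = (stimulus(pos[i] + step) - stimulus(pos[i] - step)) / (2 * eps)
        pos[i] += DT * SPEED * (1 + np.cos(phase[i])) * grad

# collective decision: which source did each agent (and the group) converge on?
choice = np.argmin(((pos[:, None, :] - SOURCES[None]) ** 2).sum(-1), axis=1)
print("per-agent choices:", choice, "consensus:", np.bincount(choice).argmax())
```

Varying the assumed K_SOCIAL and K_ENV constants is one way to explore the balance of inter-agent versus agent-environment coupling that the abstract highlights.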


Learning Sequential Structure in Simple Recurrent Networks

Neural Information Processing Systems

The network uses the pattern of activation over a set of hidden units from time-step t-1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. Cluster analyses of the hidden-layer patterns of activation showed that they encode prediction-relevant information about the entire path traversed through the network. We illustrate the phases of learning with cluster analyses performed at different points during training. Several connectionist architectures that are explicitly constrained to capture sequential information have been developed. Examples are Time Delay Networks (e.g.
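As a rough illustration of the architecture described above, here is a hedged NumPy sketch of an Elman-style simple recurrent network (SRN) trained to predict the next symbol of strings generated by a small finite-state grammar. The grammar transitions, layer sizes, and learning rate are illustrative assumptions rather than the paper's exact setup; as in the SRN, gradients flow through one time step only, with the previous hidden state treated as a fixed "context" input.

```python
# Hedged sketch of an SRN next-symbol predictor on a small finite-state grammar.
import numpy as np

rng = np.random.default_rng(1)

# Illustrative grammar: (state) -> list of (symbol, next state); strings start
# in state 0 and terminate when state 5 is reached. B/E mark begin/end.
GRAMMAR = {0: [('T', 1), ('P', 2)], 1: [('S', 1), ('X', 3)],
           2: [('T', 2), ('V', 4)], 3: [('X', 2), ('S', 5)],
           4: [('P', 3), ('V', 5)]}
SYMBOLS = ['B', 'T', 'P', 'S', 'X', 'V', 'E']
IDX = {s: i for i, s in enumerate(SYMBOLS)}

def sample_string():
    state, out = 0, ['B']
    while state != 5:
        sym, state = GRAMMAR[state][rng.integers(2)]
        out.append(sym)
    return out + ['E']

def one_hot(sym):
    v = np.zeros(len(SYMBOLS)); v[IDX[sym]] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max()); return e / e.sum()

H, V, LR = 15, len(SYMBOLS), 0.1
W_xh = rng.normal(0, 0.5, (V, H))
W_hh = rng.normal(0, 0.5, (H, H))
W_hy = rng.normal(0, 0.5, (H, V))

for epoch in range(2000):
    string = sample_string()
    h = np.zeros(H)                                   # context units
    for t in range(len(string) - 1):
        x, target = one_hot(string[t]), one_hot(string[t + 1])
        h_new = np.tanh(x @ W_xh + h @ W_hh)          # hidden = f(input, context)
        y = softmax(h_new @ W_hy)                     # predicted next symbol
        # one-step backprop: the context h is treated as a constant input
        dy = y - target
        dh = (W_hy @ dy) * (1 - h_new ** 2)
        W_hy -= LR * np.outer(h_new, dy)
        W_xh -= LR * np.outer(x, dh)
        W_hh -= LR * np.outer(h, dh)
        h = h_new                                     # copy-back for next step

# After training, the output distribution should concentrate on the successors
# that are legal in the grammar at each point in the string.
```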


Learning Sequential Structure in Simple Recurrent Networks

Neural Information Processing Systems

This tendency to preserve information about the path is not a characteristic of traditional finite-state automata.

ENCODING PATH INFORMATION

In a different set of experiments, we asked whether the SRN could learn to use the information about the path that is encoded in the hidden units' patterns of activation. In one of these experiments, we tested whether the network could master length constraints. When strings generated from the small finite-state grammar are limited to a maximum of eight letters, the prediction following the presentation of the same letter in position six or seven may be different. For example, following the sequence 'TSSSXXV', 'V' is the seventh letter and only another 'V' would be a legal successor.
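A simple way to probe this kind of path encoding, continuing the hedged SRN sketch above (and reusing the assumed W_xh, W_hh, W_hy, one_hot, softmax, and SYMBOLS defined there), is to present the same letter at the end of prefixes of different lengths and compare the resulting hidden states and predictions. The prefixes below are illustrative strings legal in the sketch's assumed grammar, not the paper's stimuli.

```python
# Probe: does the hidden layer carry path/length information? The same final
# letter reached by a longer path should yield a different hidden pattern and
# a different next-symbol distribution. Assumes the trained SRN sketch above.
import numpy as np

def run(prefix):
    """Feed a symbol prefix through the trained SRN; return (hidden, prediction)."""
    h = np.zeros(W_hh.shape[0])
    for sym in prefix:
        h = np.tanh(one_hot(sym) @ W_xh + h @ W_hh)
    return h, softmax(h @ W_hy)

# Same final letter 'V', reached by paths of different length.
h_short, p_short = run(list('BTXXV'))      # 'V' reached early
h_long,  p_long  = run(list('BTSSSXXV'))   # 'V' reached after a longer path
print("hidden-state distance:", np.linalg.norm(h_short - h_long))
for sym, a, b in zip(SYMBOLS, p_short, p_long):
    print(f"P(next={sym}): short={a:.2f}  long={b:.2f}")
```

If the network has encoded path information, the two hidden states differ and the two prediction distributions diverge, which is the behaviour the length-constraint experiment tests for.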