A Multi-Agent Psychological Simulation System for Human Behavior Modeling

Hu, Xiangen, Tong, Jiarui, Xu, Sheng

arXiv.org Artificial Intelligence

Training and education in human-centered fields require authentic practice, yet realistic simulations of human behavior have remained limited. We present a multi-agent psychological simulation system that models internal cognitive-affective processes to generate believable human behaviors. In contrast to black-box neural models, this system is grounded in established psychological theories (e.g., self-efficacy, mindset, social constructivism) and explicitly simulates an "inner parliament" of agents corresponding to key psychological factors. These agents deliberate and interact to determine the system's output behavior, enabling unprecedented transparency and alignment with human psychology. We describe the system's architecture and theoretical foundations, illustrate its use in teacher training and research, and discuss how it embodies principles of social learning, cognitive apprenticeship, deliberate practice, and meta-cognition.
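The "inner parliament" idea can be sketched as a weighted deliberation among factor agents whose individual votes stay inspectable. This is a minimal illustrative sketch, not the authors' implementation; all agent names, weights, and behaviors are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FactorAgent:
    name: str          # psychological factor, e.g. "self_efficacy"
    weight: float      # influence of this factor in deliberation
    preferences: dict  # candidate behavior -> preference score in [0, 1]

    def vote(self, behavior: str) -> float:
        return self.weight * self.preferences.get(behavior, 0.0)

def deliberate(parliament, behaviors):
    """Return the chosen behavior plus a per-agent vote breakdown,
    so the decision remains transparent rather than a black box."""
    tally = {b: {a.name: a.vote(b) for a in parliament} for b in behaviors}
    chosen = max(behaviors, key=lambda b: sum(tally[b].values()))
    return chosen, tally

parliament = [
    FactorAgent("self_efficacy", 0.6, {"attempt_task": 0.9, "avoid_task": 0.1}),
    FactorAgent("fixed_mindset", 0.4, {"attempt_task": 0.2, "avoid_task": 0.8}),
]
choice, breakdown = deliberate(parliament, ["attempt_task", "avoid_task"])
# The tally shows *why* the behavior was chosen: each factor's contribution
# to each candidate behavior can be read off directly.
```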


Learning control strategy in soft robotics through a set of configuration spaces

Ménager, Etienne, Duriez, Christian

arXiv.org Artificial Intelligence

The ability of a soft robot to perform specific tasks is determined by its contact configuration, and transitioning between configurations is often necessary to reach a desired position or manipulate an object. Based on this observation, we propose a method for controlling soft robots that involves defining a graph of configuration spaces. Different agents, whether learned or not (convex optimization, expert trajectory, and collision detection), use the structure of the graph to solve the desired task. The graph and the agents are part of the prior knowledge that is intuitively integrated into the learning process. They are used to combine different optimization methods, improve sample efficiency, and provide interpretability. We construct the graph based on the contact configurations and demonstrate its effectiveness in two scenarios: a deformable beam in contact with its environment and a soft manipulator. In both, the method outperforms the baseline in terms of stability, learning speed, and interpretability.
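The graph-of-configuration-spaces structure can be sketched as follows: nodes are contact configurations, each edge carries the agent (learned policy, convex optimizer, expert trajectory, ...) responsible for that transition, and solving a task means finding a node path and chaining the edge agents. This is an illustrative sketch under assumed configuration and agent names, not the paper's code.

```python
from collections import deque

# Nodes: contact configurations; edge values: the agent that realizes
# the transition. All names are hypothetical.
graph = {
    "free_space":   {"beam_contact": "learned_policy"},
    "beam_contact": {"free_space": "expert_trajectory",
                     "grasping":   "convex_optimizer"},
    "grasping":     {},
}

def plan(graph, start, goal):
    """BFS over configurations; returns the sequence of agents to invoke."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, agents = queue.popleft()
        if node == goal:
            return agents
        for nxt, agent in graph[node].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, agents + [agent]))
    return None  # goal configuration unreachable

route = plan(graph, "free_space", "grasping")
```

Because the plan is an explicit sequence of named agents over named configurations, the controller's behavior is interpretable by construction.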


Hierarchical Deep Reinforcement Learning Approach for Multi-Objective Scheduling With Varying Queue Sizes

Birman, Yoni, Ido, Ziv, Katz, Gilad, Shabtai, Asaf

arXiv.org Machine Learning

Multi-objective task scheduling (MOTS) is task scheduling under multiple, and possibly contradicting, constraints. A challenging extension of this problem occurs when every individual task is a multi-objective optimization problem by itself. While deep reinforcement learning (DRL) has been successfully applied to complex sequential problems, its application to the MOTS domain has been stymied by two challenges. The first challenge is the inability of the DRL algorithm to ensure that every item is processed identically regardless of its position in the queue. The second challenge is the need to manage large queues, which results in large neural architectures and long training times. In this study we present MERLIN, a robust, modular and near-optimal DRL-based approach for multi-objective task scheduling. MERLIN applies a hierarchical approach to the MOTS problem by creating one neural network for the processing of individual tasks and another for the scheduling of the overall queue. In addition to being smaller and faster to train, the resulting architecture ensures that an item is processed in the same manner regardless of its position in the queue. Additionally, we present a novel approach for efficiently applying DRL-based solutions on very large queues, and demonstrate how we effectively scale MERLIN to process queue sizes that are larger by orders of magnitude than those on which it was trained. Extensive evaluation on multiple queue sizes shows that MERLIN outperforms multiple well-known baselines by a large margin (>22%).
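The hierarchical split described above can be illustrated with two plain functions standing in for the two networks: a task-level model that scores each task in isolation (so the result cannot depend on queue position), and a queue-level scheduler that orders tasks from those scores. A minimal sketch, with made-up objective weights; it illustrates the position-invariance property rather than MERLIN itself.

```python
def task_model(task):
    """Task-level model: a per-task multi-objective score, applied
    identically to every task regardless of where it sits in the queue."""
    return 0.7 * task["value"] - 0.3 * task["cost"]

def schedule(queue):
    """Queue-level policy: order task indices by their scores."""
    scored = [(task_model(t), i) for i, t in enumerate(queue)]
    return [i for _, i in sorted(scored, reverse=True)]

queue = [{"value": 1.0, "cost": 2.0},
         {"value": 3.0, "cost": 1.0},
         {"value": 2.0, "cost": 0.5}]
order = schedule(queue)
# Reversing the queue permutes indices, but each task's score (and thus
# its treatment) is unchanged: position invariance by construction.
```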


On Memory Mechanism in Multi-Agent Reinforcement Learning

Zhou, Yilun, Asher, Derrik E., Waytowich, Nicholas R., Shah, Julie A.

arXiv.org Artificial Intelligence

Multi-agent reinforcement learning (MARL) extends (single-agent) reinforcement learning (RL) by introducing additional agents and (potentially) partial observability of the environment. Consequently, algorithms for solving MARL problems incorporate various extensions beyond traditional RL methods, such as a learned communication protocol between cooperative agents that enables exchange of private information or adaptive modeling of opponents in competitive settings. One popular algorithmic construct is a memory mechanism such that an agent's decisions can depend not only upon the current state but also upon the history of observed states and actions. In this paper, we study how a memory mechanism can be useful in environments with different properties, such as observability, internality, and the presence of a communication channel. Using both prior work and new experiments, we show that a memory mechanism is helpful when learning agents need to model other agents and/or when communication is constrained in some way; however, we must be cautious of agents achieving effective memoryfulness through other means.
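The value of a memory mechanism for opponent modeling can be shown with a toy example: a memoryful agent conditions its prediction on a bounded history of observations, while a memoryless agent sees only the current one. Against an opponent playing a period-3 pattern, only the agent with two steps of memory can identify the pattern. The environment and policies below are illustrative, not from the paper.

```python
from collections import deque

class MemoryAgent:
    """Predicts the opponent's next move from a bounded observation history."""
    def __init__(self):
        self.history = deque(maxlen=2)

    def act(self, obs):
        self.history.append(obs)
        # With two steps of history, the period-3 pattern 0,0,1 is
        # identifiable: two consecutive 0s mean the next move is 1.
        return 1 if list(self.history) == [0, 0] else 0

def memoryless_act(obs):
    return 0  # best fixed guess without history: the most common move

opponent = [0, 0, 1, 0, 0, 1, 0, 0]  # period-3 pattern
agent = MemoryAgent()
mem_hits = sum(agent.act(o) == nxt for o, nxt in zip(opponent, opponent[1:]))
flat_hits = sum(memoryless_act(o) == nxt for o, nxt in zip(opponent, opponent[1:]))
```

The memoryful agent predicts every next move correctly, while the memoryless one only matches the pattern's majority move, which is the gap a memory mechanism closes when other agents must be modeled.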