Team Structure


Can Lessons From Human Teams Be Applied to Multi-Agent Systems? The Role of Structure, Diversity, and Interaction Dynamics

Muralidharan, Rasika, Kwak, Haewoon, An, Jisun

arXiv.org Artificial Intelligence

Multi-Agent Systems (MAS) with Large Language Model (LLM)-powered agents are gaining attention, yet few studies explore their team dynamics. Inspired by human team science, we propose a multi-agent framework to examine core aspects of team science: structure, diversity, and interaction dynamics. We evaluate team performance across four tasks: CommonsenseQA, StrategyQA, Social IQa, and Latent Implicit Hate, spanning commonsense and social reasoning. Our results show that flat teams tend to perform better than hierarchical ones, while diversity has a nuanced impact. Interviews suggest agents are overconfident about their team performance, yet post-task reflections reveal both appreciation for collaboration and challenges in integration, including limited conversational coordination.


AutoBnB-RAG: Enhancing Multi-Agent Incident Response with Retrieval-Augmented Generation

Liu, Zefang, Anwar, Arman

arXiv.org Artificial Intelligence

Incident response (IR) requires fast, coordinated, and well-informed decision-making to contain and mitigate cyber threats. While large language models (LLMs) have shown promise as autonomous agents in simulated IR settings, their reasoning is often limited by a lack of access to external knowledge. In this work, we present AutoBnB-RAG, an extension of the AutoBnB framework that incorporates retrieval-augmented generation (RAG) into multi-agent incident response simulations. Built on the Backdoors & Breaches (B&B) tabletop game environment, AutoBnB-RAG enables agents to issue retrieval queries and incorporate external evidence during collaborative investigations. We introduce two retrieval settings: one grounded in curated technical documentation (RAG-Wiki), and another using narrative-style incident reports (RAG-News). We evaluate performance across eight team structures, including newly introduced argumentative configurations designed to promote critical reasoning. To validate practical utility, we also simulate real-world cyber incidents based on public breach reports, demonstrating AutoBnB-RAG's ability to reconstruct complex multi-stage attacks. Our results show that retrieval augmentation improves decision quality and success rates across diverse organizational models. This work demonstrates the value of integrating retrieval mechanisms into LLM-based multi-agent systems for cybersecurity decision-making.
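The retrieval step described above can be sketched in miniature: an agent issues a query, a retriever scores documents from a corpus, and the top passages are returned for the agent's context. The function names, the keyword-overlap scoring, and the toy corpus (with `wiki:` and `news:` entries echoing the RAG-Wiki and RAG-News settings) are illustrative assumptions, not the AutoBnB-RAG API.

```python
from collections import Counter

# Toy stand-ins for the two retrieval sources described in the abstract.
CORPUS = {
    "wiki:phishing": "Phishing delivers a malicious attachment or link to harvest credentials.",
    "wiki:lateral": "Lateral movement uses stolen credentials to pivot between hosts.",
    "news:breach42": "Attackers moved laterally after an initial phishing email.",
}

def _tokens(text: str) -> Counter:
    """Lowercase bag-of-words with trailing punctuation stripped."""
    return Counter(w.strip(".,").lower() for w in text.split())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for a real retriever)."""
    q = _tokens(query)
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: -sum((q & _tokens(kv[1])).values()),
    )
    return [doc_id for doc_id, _ in scored[:k]]
```

In a full simulation, the returned passages would be appended to the querying agent's prompt before its next turn; here `retrieve("phishing email credentials")` surfaces the phishing write-up and the matching incident report.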


Multi-Agent Collaboration in Incident Response with Large Language Models

Liu, Zefang

arXiv.org Artificial Intelligence

Incident response (IR) is a critical aspect of cybersecurity, requiring rapid decision-making and coordinated efforts to address cyberattacks effectively. Leveraging large language models (LLMs) as intelligent agents offers a novel approach to enhancing collaboration and efficiency in IR scenarios. This paper explores the application of LLM-based multi-agent collaboration using the Backdoors & Breaches framework, a tabletop game designed for cybersecurity training. We simulate real-world IR dynamics through various team structures, including centralized, decentralized, and hybrid configurations. By analyzing agent interactions and performance across these setups, we provide insights into optimizing multi-agent collaboration for incident response. Our findings highlight the potential of LLMs to enhance decision-making, improve adaptability, and streamline IR processes, paving the way for more effective and coordinated responses to cyber threats.
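Two of the team structures mentioned above can be contrasted with a minimal sketch: in a centralized configuration a designated leader's proposal decides the team's action, while in a decentralized one the agents resolve it by majority vote. The decision rules and function names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def centralized_decision(proposals: dict[str, str], leader: str) -> str:
    """Centralized structure: the leader's proposal wins regardless of the others."""
    return proposals[leader]

def decentralized_decision(proposals: dict[str, str]) -> str:
    """Decentralized structure: simple majority vote over all agents' proposals."""
    votes = Counter(proposals.values())
    return votes.most_common(1)[0][0]
```

A hybrid configuration could combine the two, e.g. a vote that the leader may override only on ties.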


Exploring the Benefits of Teams in Multiagent Learning

Radke, David, Larson, Kate, Brecht, Tim

arXiv.org Artificial Intelligence

For problems requiring cooperation, many multiagent systems implement solutions among either individual agents or across an entire population towards a common goal. Multiagent teams are primarily studied when in conflict; however, organizational psychology (OP) highlights the benefits of teams among human populations for learning how to coordinate and cooperate. In this paper, we propose a new model of multiagent teams for reinforcement learning (RL) agents inspired by OP and early work on teams in artificial intelligence. We validate our model using complex social dilemmas that are popular in recent multiagent RL and find that agents divided into teams develop cooperative pro-social policies despite incentives to not cooperate. Furthermore, agents are better able to coordinate and learn emergent roles within their teams and achieve higher rewards compared to when the interests of all agents are aligned.
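One simple way to realize the team incentives discussed above is reward sharing: the population is partitioned into teams, and each agent receives the mean reward of its team, giving teammates a shared stake in coordinating. This is an assumed minimal structure for illustration, not the paper's exact formulation.

```python
from statistics import mean

def team_rewards(individual: dict[str, float], teams: list[list[str]]) -> dict[str, float]:
    """Replace each agent's reward with the mean reward of its team.

    individual: mapping from agent name to its own reward this step.
    teams: a partition of the agents into team membership lists.
    """
    shared: dict[str, float] = {}
    for members in teams:
        r = mean(individual[a] for a in members)
        for a in members:
            shared[a] = r
    return shared
```

Under this scheme an agent can profit from a teammate's success even when its own reward is low, which is the pressure toward pro-social policies the abstract describes.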


Towards a Better Understanding of Learning with Multiagent Teams

Radke, David, Larson, Kate, Brecht, Tim, Tilbury, Kyle

arXiv.org Artificial Intelligence

While it has long been recognized that a team of individual learning agents can be greater than the sum of its parts, recent work has shown that larger teams are not necessarily more effective than smaller ones. In this paper, we study why and under which conditions certain team structures promote effective learning for a population of individual learning agents. We show that, depending on the environment, some team structures help agents learn to specialize into specific roles, resulting in more favorable global results. However, large teams create credit assignment challenges that reduce coordination, leading to large teams performing poorly compared to smaller ones. We support our conclusions with both theoretical analysis and empirical results.


Learning to Learn Group Alignment: A Self-Tuning Credo Framework with Multiagent Teams

Radke, David, Tilbury, Kyle

arXiv.org Artificial Intelligence

Mixed incentives among a population with multiagent teams has been shown to have advantages over a fully cooperative system; however, discovering the best mixture of incentives or team structure is a difficult and dynamic problem. We propose a framework where individual learning agents self-regulate their configuration of incentives through various parts of their reward function. This work extends previous work by giving agents the ability to dynamically update their group alignment during learning and by allowing teammates to have different group alignment. Our model builds on ideas from hierarchical reinforcement learning and meta-learning to learn the configuration of a reward function that supports the development of a behavioral policy. We provide preliminary results in a commonly studied multiagent environment and find that agents can achieve better global outcomes by self-tuning their respective group alignment parameters.
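The mixed incentives described above can be sketched as a weighted blend of reward components: each agent mixes its own reward, its team's reward, and the population's reward with weights that sum to one, and those weights are the group-alignment parameters an agent would self-tune during learning. The function name and the example weights are illustrative assumptions.

```python
def credo_reward(r_self: float, r_team: float, r_pop: float,
                 w: tuple[float, float, float]) -> float:
    """Blend reward components with alignment weights w = (w_self, w_team, w_pop)."""
    w_self, w_team, w_pop = w
    assert abs(w_self + w_team + w_pop - 1.0) < 1e-9, "weights must sum to 1"
    return w_self * r_self + w_team * r_team + w_pop * r_pop
```

Self-tuning would then amount to treating the weight vector as a learnable parameter of each agent, updated alongside (or above, in a hierarchical setup) the behavioral policy.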


Design Thinking for AI : Sustainable AI Solution Design - Cuelogic Technologies Pvt. Ltd.

#artificialintelligence

It's important to approach AI from the philosophy of design thinking, so that there are a structure and an approach charted out for the complex world of AI. Since there is an extensive range of solutions one can design for AI applications, developers need to remain vigilant about the changing nature of artificial intelligence. Therefore, to ensure quality and proper guidance, it's best to think about AI from a design-thinking perspective. This captures much of the innovation that will potentially occur in the AI space, so that problems can be solved using another approach. Additionally, as AI is rapidly evolving, this mindset makes the process of structuring a final proposal that much easier.


AI Advances Facilitate SCRUM Team Construction For Agile Development

#artificialintelligence

Artificial intelligence is the basis for many agile applications. InfoQ author Ben Linders wrote a detailed article about the role of AI in the agile world. One of the many ways agile developers are utilizing AI is by streamlining the formation of Scrum teams. Businesses face a fast-paced environment, and more organizations are looking for effective ways to keep up. It's all about fulfilling the needs of your customers and improving the speed at which you can put a product on the market.


A Descriptive Model of Robot Team and the Dynamic Evolution of Robot Team Cooperation

Li, Shu-qin, Shuai, Lan, Cheng, Xian-yi, Tang, Zhen-min, Yang, Jing-yu

arXiv.org Artificial Intelligence

At present, research on robot team cooperation is still in a qualitative-analysis phase and lacks a descriptive model that can quantitatively capture the dynamic evolution of team cooperative relationships under constantly changing task demands in the multi-robot field. This paper first gives a holistic, static description of the robot-team organization model HWROM, then draws on Markov processes and Bayes' theorem to describe dynamically how team cooperative relationships are built. Finally, from the cooperative entity layer, ability layer, and relation layer, it studies team formation and cooperative mechanisms, and discusses how to optimize the relevant action sets during the evolution. The dynamic evolution model of the robot team and of cooperative relationships between robot teams proposed and described in this paper can not only characterize the robot team as a whole, but also depict the dynamic evolving process quantitatively. Users can also apply the model to predict the cooperative relationships and actions of a robot team when it encounters new demands.