Evolving Cooperation Strategies

AAAI Conferences

We use the predator-prey domain to evaluate our approach. The same program is used by all four predators. Each program in the population is a strategy for implicit cooperation to capture the prey. Korf's original work (Korf 1992) used the Manhattan distance (MDO) (the sum of the x and y distances between agents) and the max norm (MNO) (the maximum of the x and y distances between agents), with agents moving one at a time. To be more realistic, we ran experiments with the same strategies where all agents move at once (respectively MD and MN). The GP-evolved strategy is also run under similar conditions.
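As a point of reference, the two metrics can be written out directly. The sketch below is our illustration, not the paper's code; representing grid positions as (x, y) tuples is an assumption:

```python
# Illustrative sketch of the two distance metrics named above;
# positions are assumed to be (x, y) tuples on a grid.

def manhattan_distance(a, b):
    """MD: sum of the x and y distances between two positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def max_norm(a, b):
    """MN: maximum of the x and y distances between two positions."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

predator, prey = (2, 3), (5, 1)
print(manhattan_distance(predator, prey))  # 3 + 2 = 5
print(max_norm(predator, prey))            # max(3, 2) = 3
```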


Effects of Communication on the Evolution of Squad Behaviours

AAAI Conferences

As the non-playable characters (NPCs) of squad-based shooter computer games share a common goal, they should work together in teams and display cooperative behaviours that are tactically sound. Our research examines genetic programming (GP) as a technique to automatically develop effective team behaviours for shooter games. GP has been used to evolve teams capable of defeating a single powerful enemy agent in a number of environments without the use of any explicit team communication. The aim of this paper is to explore the effects of communication on the evolution of effective squad behaviours. Thus, NPCs are given the ability to communicate their perceived information during evolution. The results show that communication between team members enables an improvement in average team effectiveness.
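As a rough illustration of what communicating perceived information can mean in practice, the sketch below shares sightings through a common blackboard. The class and field names are our assumptions, not the paper's representation:

```python
# Hedged sketch of perception sharing within a squad; the blackboard
# structure and all names here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Percept:
    sender: int                 # id of the NPC reporting the sighting
    enemy_pos: Tuple[int, int]  # perceived (x, y) position of the enemy

@dataclass
class SquadBlackboard:
    """Shared channel through which squad members exchange percepts."""
    percepts: List[Percept] = field(default_factory=list)

    def broadcast(self, percept: Percept) -> None:
        self.percepts.append(percept)

    def latest_enemy_position(self) -> Optional[Tuple[int, int]]:
        # An evolved behaviour can act on a teammate's observation here.
        return self.percepts[-1].enemy_pos if self.percepts else None
```

Under this reading, evolution decides both when an NPC broadcasts a percept and when it acts on a teammate's report rather than its own.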


The Influence of a Domain's Behavioral Laws on On-Line Learning

AAAI Conferences

In multiagent systems, the number of potential interactions between agents is combinatorial. Explicitly coding in each behavioral strategy is not an option. The agents can start with a default set of behavioral rules and adapt them online to fit with their experiences. We investigate perhaps the simplest testbed for multiagent systems: the pursuit game. Four predator agents try to capture a prey agent. We show how different assumptions about the domain can drastically alter the need for learning. In one formulation there is no need for learning at all: simple greedy agents can effectively capture the prey (Korf 1992). As we remove layers of abstraction, we find that learning is necessary for the predator agents to effectively capture the prey (Haynes & Sen 1996).

Introduction

The field of multiagent systems (MAS), also traditionally referred to as distributed artificial intelligence (DAI), is concerned with the behavior of computational agents when they must interact. One line of research is into the dynamics of cooperation of the group or team (Haynes & Sen 1997c; Sandholm & Lesser 1995). Another line of research is into the dynamics of competition as either individual agents or groups of agents vie for resources in artificial economies (Mullen & Wellman 1995). An individual agent in a group must balance the pressure of competition against that of cooperation.
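To make the "simple greedy agent" formulation concrete: each predator takes whichever move most reduces its distance to the prey. The sketch below is our illustration under assumed conventions (four-connected grid moves plus staying put, first-found tie-breaking), not Korf's code:

```python
# Sketch of a greedy predator step: pick the move that minimizes
# Manhattan distance to the prey. Move set and tie-breaking are
# illustrative assumptions.

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_step(predator, prey):
    """Return the predator's next position under the greedy rule."""
    return min(
        ((predator[0] + dx, predator[1] + dy) for dx, dy in MOVES),
        key=lambda pos: manhattan(pos, prey),
    )

print(greedy_step((0, 0), (3, 4)))  # steps one cell toward the prey: (0, 1)
```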


Co-Evolving Team Capture Strategies for Dissimilar Robots

AAAI Conferences

Evolving team members to act cohesively is a complex and challenging problem. To allow the greatest range of solutions in team problem solving, heterogeneous agents are desirable. To produce highly specialized agents, team members should be evolved in separate populations. Co-evolution in separate populations requires a system for selecting suitable partners for evaluation at trial time. Selecting too many partners for evaluation drives computation time to unreasonable levels, while selecting too few partners prevents the GA from recognizing highly fit individuals. In previous work, we employed a method based on punctuated anytime learning which periodically tests a number of partner combinations to select a single individual from each population to be used at trial time. We began testing our method in simulation using a two-agent box pushing task. We then expanded our research by simulating a predator-prey scenario in which all the agents had identical capabilities. In this paper, we report the expansion of our work by applying this method of team learning to five dissimilar robots.
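The partner-selection step can be pictured as follows. This is a loose sketch of the idea only; the sampling scheme, function names, and scoring interface are our assumptions, not the published algorithm:

```python
# Hedged sketch of selecting one representative per co-evolving
# population by testing a sample of partner combinations.

import random

def select_representatives(populations, evaluate_team, samples=10):
    """Pick one individual per population by sampling team line-ups.

    populations   -- one list of candidate individuals per robot
    evaluate_team -- callable scoring a full team (one member per population)
    """
    best_team, best_score = None, float("-inf")
    for _ in range(samples):
        team = [random.choice(pop) for pop in populations]
        score = evaluate_team(team)
        if score > best_score:
            best_team, best_score = team, score
    return best_team  # fixed as the partners used at trial time
```

Sampling only a bounded number of combinations reflects the trade-off the abstract describes: exhaustive partner testing is too expensive, while too little testing hides highly fit individuals.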


Learning Cases to Resolve Conflicts and Improve Group Behavior

AAAI Conferences

Groups of agents following fixed behavioral rules can be limited in performance and efficiency. Adaptability and flexibility are key components of intelligent behavior which allow agent groups to improve performance in a given domain using prior problem-solving experience. We motivate the usefulness of individual learning by group members in the context of overall group behavior. In particular, we propose a framework in which individual group members learn cases to improve their model of other group members. We use a testbed problem from the distributed AI literature to show that simultaneous learning by group members can lead to significant improvement in group performance and efficiency over agent groups following static behavioral rules.

Introduction

An agent is rational if, when faced with a choice from a set of actions, it chooses the one that maximizes the expected utilities of those actions. Implicit in this definition is the assumption that the preference of the agent for different actions is based on the utilities resulting from those actions. A problem in multiagent systems is that the best action for Agent Ai might be in conflict with that for another Agent Aj. Agent Ai, then, should try to model the behavior of Aj and incorporate that into its expected utility calculations (Gmytrasiewicz & Durfee 1995). The optimal action for an individual agent might not be the optimal action for its group. Thus an agent can evaluate the utility of its actions on two levels: individual and group.
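The rationality definition above amounts to an argmax over expected utilities, with Ai holding a probabilistic model of Aj. The sketch below makes that concrete; the probabilities and utility table are invented for illustration and are not from the paper:

```python
# Minimal sketch of expected-utility maximization against a model of
# the other agent: E[U(a)] = sum over Aj's actions of P(aj) * U(a, aj).
# All values here are illustrative assumptions.

def expected_utility(action, model_of_other, utility):
    return sum(p * utility[(action, other)]
               for other, p in model_of_other.items())

def rational_choice(actions, model_of_other, utility):
    return max(actions,
               key=lambda a: expected_utility(a, model_of_other, utility))

# Example: Ai believes Aj moves "left" with prob. 0.7, "right" with 0.3.
model = {"left": 0.7, "right": 0.3}
U = {("stay", "left"): 1, ("stay", "right"): 0,
     ("move", "left"): 0, ("move", "right"): 2}
print(rational_choice(["stay", "move"], model, U))  # -> "stay" (EU 0.7 vs 0.6)
```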