Agent Oriented Design of a Soccer Robot Team

AAAI Conferences

The multi-agent paradigm is widely used to provide solutions to a variety of organizational problems related to the collective achievement of one or more tasks. All these problems share a common design difficulty: how does one proceed from a global specification of the collective task to the specification of the local behaviors that are to be provided to the agents? We have defined the Cassiopeia method, whose specificity is to articulate the design of a multi-agent system around the notion of organization. This paper reports the use of this method for designing and implementing the organization of a soccer robot team. We show why we chose this application and how we designed it, and we discuss its interest and inherent difficulties, in order to clearly express the need for a design methodology dedicated to DAI.

Introduction

The multi-agent paradigm is widely used to provide solutions to a variety of organizational problems related to the collective achievement of one or more tasks: computer-supported cooperative work, flexible workshop or network management, distributed process control, or coordination of patrols of drones (Avouris & Gasser 1992; Werner & Demazeau 1992; Demazeau & Muller 1991). All these problems share a common design difficulty: how does one proceed from a global specification of the collective task to the specification of the individual behaviors that are to be provided to the agents that achieve the task? A problem of organization has to be solved, most of the time in a dynamic fashion, so as to obtain the collective achievement of the considered task.

Overwatch: An Educational Testbed for Multi-Robot Experimentation

AAAI Conferences

Educators who wish to engage their students in multi-agent experimentation and learning need an inexpensive multi-robot system that leverages existing equipment and open-source software. This paper proposes Overwatch as an inexpensive educational tool for teaching and experimenting in multi-robot systems. The interaction of multiple agents within a single environment is an important area of study. It is vital that agents within the environment perceive other agents as intelligent, acting within the environment as cooperative teammates or as competitive members of another team. To do so, the system must meet three goals: first, allow multiple robots to communicate and coordinate; second, localize within a shared global coordinate system; third, recognize teammates and members of other teams. The cost and scale of such experimental platforms places them outside the reach of many educational institutions or limits the number of agents that can interact within the system (Liu 2011). The goal of Overwatch is to create an experimental platform for multi-agent systems composed of much smaller, albeit less capable, robots, many of which are already prevalent in academic institutions. Making use of available open-source libraries and utilizing lower-cost robots, such as Scribblers, allows for experiments with many agents. This enables Overwatch to fit within the budget limitations of an academic setting. The Overwatch platform provides the Scribblers with global localization capabilities. This paper presents the system in detail and includes experiments that show its ability to localize, interact with other agents, and coordinate behaviors with those agents. Details on how to set up the system are also included.
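The abstract does not spell out how Overwatch maintains its shared global coordinate system, but the second and third goals can be illustrated with a minimal sketch: an overhead observer detects a marker on each robot in image space and maps it into a common world frame with a team label. All names, marker ids, and calibration constants below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: mapping an overhead-camera marker detection into a
# shared world frame. SCALE, ORIGIN_PX, and TEAM_BY_MARKER are invented
# for illustration; a real system would use a proper camera calibration.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # meters in the shared world frame
    y: float
    team: str

SCALE = 0.005            # assumed: 1 pixel = 5 mm on the floor plane
ORIGIN_PX = (320, 240)   # assumed: pixel that maps to world (0, 0)

# Assumed marker-id-to-team assignment for teammate/opponent recognition.
TEAM_BY_MARKER = {1: "red", 2: "red", 3: "blue", 4: "blue"}

def pixel_to_pose(marker_id: int, px: float, py: float) -> Pose:
    """Convert a marker detection (pixels) to a world-frame pose."""
    x = (px - ORIGIN_PX[0]) * SCALE
    y = (ORIGIN_PX[1] - py) * SCALE   # image y grows downward
    return Pose(x, y, TEAM_BY_MARKER.get(marker_id, "unknown"))
```

In practice such a platform would calibrate the camera with a homography rather than a single linear scale, and would broadcast the resulting poses to the robots over a network link so they can coordinate.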

Vision, Strategy, and Localization Using the Sony Robots at RoboCup-98

AI Magazine

Sony has provided a robot platform for research and development in physical agents, namely, fully autonomous legged robots. In this article, we describe our work using Sony's legged robots to participate in the RoboCup-98 legged-robot demonstration and competition. Robotic soccer represents a challenging environment for research in systems with multiple robots that need to achieve concrete objectives, particularly in the presence of an adversary. Furthermore, RoboCup offers an excellent opportunity for robot entertainment. We introduce the RoboCup context and briefly present Sony's legged robot. We developed a vision-based navigation system and a Bayesian localization algorithm. Team strategy is achieved through predefined behaviors and learning by instruction.
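The article names a Bayesian localization algorithm without detailing it; the core idea can be sketched as a one-dimensional histogram (Bayes) filter that alternates a motion prediction with a landmark-based measurement update. The grid size, motion noise, and boolean landmark sensor below are invented for the example and are not the authors' actual design.

```python
# Minimal 1-D histogram Bayes filter sketch (illustrative parameters only).
N = 10  # number of discretized positions along the field

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def predict(belief, move, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Motion update: shift belief by `move` cells with assumed noise."""
    new = [0.0] * N
    for i in range(N):
        new[(i + move) % N] += p_exact * belief[i]
        new[(i + move - 1) % N] += p_under * belief[i]
        new[(i + move + 1) % N] += p_over * belief[i]
    return new

def update(belief, landmark_seen, landmarks, p_hit=0.9, p_miss=0.1):
    """Measurement update: reweight cells by an assumed sensor model."""
    likelihood = [p_hit if ((i in landmarks) == landmark_seen) else p_miss
                  for i in range(N)]
    return normalize([l * b for l, b in zip(likelihood, belief)])

belief = [1.0 / N] * N                 # uniform prior over positions
belief = update(belief, True, {3, 7})  # the robot sees a landmark
belief = predict(belief, 1)            # the robot moves one cell forward
```

The same predict/update cycle generalizes to the 2-D poses the legged robots must estimate, with vision-detected field landmarks replacing the boolean sensor used here.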

Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination

AAAI Conferences

As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such ad hoc team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This paper challenges the AI community to develop theory and to implement prototypes of ad hoc team agents. It defines the concept of ad hoc team agents, specifies an evaluation paradigm, and provides examples of possible theoretical and empirical approaches to this challenge. The goal is to encourage progress toward this ambitious, newly realistic, and increasingly important research goal.

After Mastering Go and StarCraft, DeepMind Takes on Soccer


Having notched impressive victories over human professionals in Go, Atari games, and most recently StarCraft 2, Google's DeepMind team has now turned its formidable research efforts to soccer. In a paper released last week, the UK AI company demonstrates a novel machine learning method that trains a team of AI agents to play a simulated version of "the beautiful game." Gaming, AI, and soccer fans hailed DeepMind's latest innovation on social media, with comments like "You should partner with EA Sports for a FIFA environment!" Machine learning, and particularly deep reinforcement learning, has in recent years achieved remarkable success across a wide range of competitive games. Collaborative multi-agent games, however, have remained a relatively difficult research domain.