Collaborating Authors

Models we Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation Artificial Intelligence

We advocate the development of a discipline of interacting with and extracting information from models, both mathematical (e.g. game-theoretic) and computational (e.g. agent-based models). We outline some directions for the development of such a discipline:

- the development of logical frameworks for the systematic formal specification of stylized facts and social mechanisms in (mathematical and computational) social science. Such frameworks would bring to attention new issues, such as phase transitions, i.e. dramatic changes in the validity of the stylized facts beyond some critical values in parameter space. We argue that such statements are useful for logical frameworks describing properties of agent-based models (ABMs);
- the adaptation of tools from the theory of reactive systems (such as bisimulation) to obtain practically relevant notions of two systems "having the same behavior";
- the systematic development of an adversarial theory of model perturbations, which investigates the robustness of conclusions derived from models of social behavior to variations in several features of the social dynamics, including the activation order, the underlying social network, and individual agent behavior.
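The last direction can be made concrete with a minimal sketch. Everything below (the 4-cycle network, the majority-adoption rule, the two activation orders) is a hypothetical toy model, not from the paper; it only illustrates how a conclusion such as "the agents reach consensus on opinion X" can fail to be robust to a perturbation of the activation order:

```python
def run_majority_dynamics(neighbors, init, order, steps=50):
    """Asynchronous majority dynamics: each activated agent adopts the
    majority opinion among its neighbors (ties keep the current opinion)."""
    state = dict(init)
    for _ in range(steps):
        for i in order:
            ones = sum(state[j] for j in neighbors[i])
            if 2 * ones > len(neighbors[i]):
                state[i] = 1
            elif 2 * ones < len(neighbors[i]):
                state[i] = 0
    return state

# A 4-cycle: each agent sees its two ring neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
init = {0: 1, 1: 0, 2: 1, 3: 0}

# Same model, same initial state, two activation orders: the order
# [0, 1, 2, 3] settles on all-0, the reversed order on all-1, so the
# *value* of the consensus is an artifact of the activation schedule.
a = run_majority_dynamics(neighbors, init, order=[0, 1, 2, 3])
b = run_majority_dynamics(neighbors, init, order=[3, 2, 1, 0])
print(a, b)
```

Both runs reach a stable consensus, but on opposite opinions; an adversarial theory of perturbations would ask which such conclusions survive over a whole family of schedules and networks.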

Rational Verification: From Model Checking to Equilibrium Checking

AAAI Conferences

Rational verification is concerned with establishing whether a given temporal logic formula φ is satisfied in some or all equilibrium computations of a multi-agent system – that is, whether the system will exhibit the behaviour φ under the assumption that agents within the system act rationally in pursuit of their preferences. After motivating and introducing the framework of rational verification, we present formal models through which rational verification can be studied, and survey the complexity of key decision problems. We give an overview of a prototype software tool for rational verification, and conclude with a discussion and related work.
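In the simplest possible setting, equilibrium checking can be sketched as follows. The game below is a hypothetical one-shot two-player coordination game with invented payoffs, not the temporal, multi-agent setting of the paper; it only shows the shape of the question, namely whether a property φ holds on some or all equilibrium outcomes rather than on all computations:

```python
from itertools import product

# Payoff table for a two-player coordination game (hypothetical numbers):
# coordinating on 'a' pays (2, 2), coordinating on 'b' pays (1, 1).
payoff = {
    ('a', 'a'): (2, 2),
    ('a', 'b'): (0, 0),
    ('b', 'a'): (0, 0),
    ('b', 'b'): (1, 1),
}
strategies = ['a', 'b']

def is_nash(profile):
    """A profile is a pure Nash equilibrium if no player can gain
    by unilaterally deviating to another strategy."""
    for player in (0, 1):
        for dev in strategies:
            alt = list(profile)
            alt[player] = dev
            if payoff[tuple(alt)][player] > payoff[profile][player]:
                return False
    return True

equilibria = [p for p in product(strategies, repeat=2) if is_nash(p)]

# Equilibrium checking: instead of asking whether phi holds on ALL
# outcomes (classical verification), ask whether it holds on the
# rational (equilibrium) outcomes only.  Here phi = "players coordinate".
phi = lambda p: p[0] == p[1]
print(equilibria)
print(all(phi(p) for p in equilibria))
```

Both pure equilibria, ('a', 'a') and ('b', 'b'), satisfy φ, so φ is guaranteed under the rationality assumption even though the non-equilibrium profiles violate it.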

A Formal Framework for Reasoning about Agents' Independence in Self-organizing Multi-agent Systems Artificial Intelligence

Self-organization is a process in which a stable pattern is formed through cooperative behavior between parts of an initially disordered system, without external control or influence. It has been introduced to multi-agent systems as an internal control process or mechanism to solve difficult problems spontaneously. However, because a self-organizing multi-agent system has autonomous agents and local interactions between them, it is difficult to predict the behavior of the system from the behavior of the local agents we design. This paper proposes a logic-based framework for self-organizing multi-agent systems, in which agents interact with each other by following their prescribed local rules. The dependence relation between coalitions of agents, regarding their contributions to the global behavior of the system, is reasoned about from both structural and semantic perspectives. We show that verifying such a self-organizing multi-agent system can be done in exponential time. We then combine our framework with graph theory to decompose a system into different coalitions located in different layers, which allows us to verify agents' full contributions more efficiently. The resulting information about agents' full contributions lets us understand the complex link between local agent behavior and system-level behavior in a self-organizing multi-agent system. Finally, we show how our framework can be used to model a constraint satisfaction problem.
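The flavor of the final application, modeling a constraint satisfaction problem through local rules, can be evoked with a small sketch. This is not the paper's logic-based framework; the repair rule, the ring graph, and the color set are assumptions chosen for illustration. Each agent follows one local rule, and a globally valid graph coloring emerges without any central controller:

```python
def self_organize_coloring(neighbors, colors, rounds=10):
    """Each agent follows a single local rule: if my color clashes with
    a neighbor, switch to the smallest color no neighbor is using.
    Iterate until no agent needs to change (a fixed point)."""
    assignment = {i: 0 for i in neighbors}  # disordered start: all equal
    for _ in range(rounds):
        changed = False
        for agent in sorted(neighbors):
            taken = {assignment[n] for n in neighbors[agent]}
            if assignment[agent] in taken:
                assignment[agent] = min(c for c in colors if c not in taken)
                changed = True
        if not changed:
            break
    return assignment

# A 5-cycle needs 3 colors; each agent only sees its two ring neighbors.
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
result = self_organize_coloring(ring, colors=range(3))
conflicts = sum(result[i] == result[j] for i in ring for j in ring[i])
print(result, conflicts)
```

No agent ever sees the whole graph, yet the run ends with zero conflicts; the gap between this emergent guarantee and what each local rule promises is exactly what the paper's verification machinery is meant to close.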

Paradigms of Computational Agency Artificial Intelligence

Today's information systems are complex, distributed, and need to scale to millions of users and a variety of devices, with guaranteed uptimes. As a result, top-down approaches to systems design and engineering are becoming increasingly infeasible. Starting sometime in the 1990s, a branch of systems engineering has approached the problem of systemic complexity in a bottom-up fashion, by designing "autonomous" or "intelligent" agents that can proactively and autonomously act and decide on their own, to address specific, local issues pertaining to their immediate requirements. They can also communicate and coordinate with one another to jointly solve larger problems. The autonomous nature of agents requires some form of rationale that justifies their actions. Given that object-oriented modeling had attracted mainstream attention at that time, the distinction between mechanistic "objects" and autonomous "agents" was often summarized with this slogan (Jennings et al., 1998): objects do it for free, agents do it for money.

Biologically-Inspired Control for Multi-Agent Self-Adaptive Tasks

AAAI Conferences

Decentralized agent groups typically require complex mechanisms to accomplish coordinated tasks. In contrast, biological systems can achieve intelligent group behaviors with each agent performing only simple sensing and actions. We summarize our recent papers on a biologically-inspired control framework for multi-agent tasks that is based on a simple, iterative control law. We theoretically analyze important aspects of this decentralized approach, such as convergence and scalability, and further demonstrate how it applies to a diverse set of real-world multi-agent applications. These results provide a deeper understanding of the contrast between centralized and decentralized algorithms in multi-agent tasks and autonomous robot control.
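A canonical example of a simple, iterative decentralized control law is a consensus update. The sketch below is illustrative only: the gain value, the fully connected topology, and the initial positions are assumptions, not details from the summarized papers. Each agent repeatedly moves toward the average of its neighbors' positions, and the group converges to a common point with no central coordinator:

```python
def consensus_step(positions, neighbors, gain=0.5):
    """One iteration of the control law: each agent moves a fraction
    `gain` of the way toward the average of its neighbors' positions."""
    new = {}
    for i, x in positions.items():
        avg = sum(positions[j] for j in neighbors[i]) / len(neighbors[i])
        new[i] = x + gain * (avg - x)
    return new

# Fully connected group of four agents on a line (hypothetical setup).
neighbors = {i: [j for j in range(4) if j != i] for i in range(4)}
positions = {0: 0.0, 1: 2.0, 2: 6.0, 3: 8.0}

for _ in range(30):
    positions = consensus_step(positions, neighbors)

spread = max(positions.values()) - min(positions.values())
print(positions, spread)  # all agents near the group mean, 4.0
```

The update preserves the group mean and contracts the spread at every step, which is the kind of convergence property the summarized work analyzes theoretically for more general topologies.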