Wooldridge, Michael


Rational Verification: From Model Checking to Equilibrium Checking

AAAI Conferences

Rational verification is concerned with establishing whether a given temporal logic formula φ is satisfied in some or all equilibrium computations of a multi-agent system – that is, whether the system will exhibit the behaviour φ under the assumption that agents within the system act rationally in pursuit of their preferences. After motivating and introducing the framework of rational verification, we present formal models through which rational verification can be studied, and survey the complexity of key decision problems. We give an overview of a prototype software tool for rational verification, and conclude with a discussion and related work.
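The core decision problems the abstract alludes to can be illustrated on a toy normal-form game. The sketch below is a hypothetical example (the two-agent coordination game, the strategy sets, and the property phi are all invented for illustration): it enumerates pure strategy profiles, keeps the Nash equilibria, and then checks whether phi holds in some equilibrium or in all equilibria, mirroring the existential and universal variants of equilibrium checking.

```python
from itertools import product

# Hypothetical two-player coordination game: each agent picks "a" or "b";
# both prefer to match, and both mildly prefer matching on "a".
strategies = {"1": ["a", "b"], "2": ["a", "b"]}
payoff = {
    ("a", "a"): (2, 2),
    ("b", "b"): (1, 1),
    ("a", "b"): (0, 0),
    ("b", "a"): (0, 0),
}

def is_nash(profile):
    """A profile is a pure Nash equilibrium if no agent can gain
    by unilaterally switching to another strategy."""
    for i, agent in enumerate(strategies):
        for alt in strategies[agent]:
            deviation = list(profile)
            deviation[i] = alt
            if payoff[tuple(deviation)][i] > payoff[profile][i]:
                return False
    return True

equilibria = [p for p in product(*strategies.values()) if is_nash(p)]

# The property phi to verify: "the agents coordinate on the same choice".
phi = lambda profile: profile[0] == profile[1]

e_nash = any(phi(p) for p in equilibria)  # phi holds in SOME equilibrium
a_nash = all(phi(p) for p in equilibria)  # phi holds in ALL equilibria
print(equilibria, e_nash, a_nash)
```

In the full framework the property is a temporal logic formula evaluated on the computations the equilibrium strategies generate, not a predicate on one-shot profiles; this sketch only shows the "restrict attention to equilibrium outcomes" step.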


Preface

AAAI Conferences

This is an exciting time to be an artificial intelligence researcher. AI technologies and applications have truly entered our everyday lives, with AI systems in use throughout society. Against this backdrop of AI’s remarkable success, the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-2015), to be held in Buenos Aires, Argentina, between 25 and 31 July 2015, is poised to break several records. This is the first time the flagship international AI conference has been held in South America, and the number of submissions to the technical program has reached a historic high. These proceedings collect some of the most exciting research taking place in AI today and offer a window into the future. The theme of this year’s conference is Artificial Intelligence and Arts. Being held in Argentina, the home of Tango, the conference will feature invited talks, performances, demos, and a technical track dedicated to the exploration and celebration of AI’s growing role in the Arts, both in enriching and producing the Arts and in injecting art into AI to make it a more elegant and accessible scientific discipline.


A Graphical Representation for Games in Partition Function Form

AAAI Conferences

We propose a novel representation for coalitional games with externalities, called Partition Decision Trees. This representation is based on rooted directed trees, where non-leaf nodes are labelled with agents' names, leaf nodes are labelled with payoff vectors, and edges indicate membership of agents in coalitions. We show that this representation is fully expressive, and for certain classes of games significantly more concise than an extensive representation. Most importantly, Partition Decision Trees are the first formalism in the literature under which most of the direct extensions of the Shapley value to games with externalities can be computed in polynomial time.
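As a rough illustration of the tree structure the abstract describes (the node encoding, coalition labels, and payoff numbers below are all hypothetical, and the paper's formalism is richer), one can picture a rooted tree in which each internal node names an agent, each outgoing edge commits that agent to a coalition, and the leaf reached stores the payoff vector for the induced partition:

```python
# Minimal sketch of a Partition-Decision-Tree-style lookup.
def leaf(payoffs):
    return {"payoffs": payoffs}

def node(agent, branches):
    # branches maps a coalition label to a subtree
    return {"agent": agent, "branches": branches}

def evaluate(tree, partition):
    """Walk the tree, following each agent's coalition choice in
    `partition`, and return the payoff vector at the reached leaf."""
    while "payoffs" not in tree:
        tree = tree["branches"][partition[tree["agent"]]]
    return tree["payoffs"]

# Two agents: agent 1 at the root picks coalition C1 or C2, then agent 2.
tree = node(1, {
    "C1": node(2, {"C1": leaf((3, 3)),   # grand coalition {1, 2}
                   "C2": leaf((1, 2))}), # singletons {1}, {2}
    "C2": node(2, {"C2": leaf((2, 2))}), # both agents in C2
})

print(evaluate(tree, {1: "C1", 2: "C1"}))
```

Conciseness comes from sharing: branches that lead to the same payoffs need not spell out every partition explicitly, unlike a flat partition-function table.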


Efficient Computation of Semivalues for Game-Theoretic Network Centrality

AAAI Conferences

Solution concepts from cooperative game theory, such as the Shapley value or the Banzhaf index, have recently been advocated as interesting extensions of standard measures of node centrality in networks. While this direction of research is promising, the computation of game-theoretic centrality can be challenging. In an attempt to address the computational issues of game-theoretic network centrality, we present a generic framework for constructing game-theoretic network centralities. We prove that all extensions that can be expressed in this framework are computable in polynomial time. Using our framework, we present the first game-theoretic extensions of weighted and normalized degree centralities, impact factor centrality, distance-scaled and normalized betweenness centrality, and closeness and normalized closeness centralities.
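To make the notion of game-theoretic centrality concrete: in the simplest variant, each coalition of nodes is valued by how much of the network it "covers", and a node's centrality is its Shapley value in that game. The sketch below is a generic Monte Carlo estimator over a small invented graph, not the paper's polynomial-time framework (which derives exact closed forms); the graph, the coverage game, and the sample count are all assumptions for illustration.

```python
import random

# Small undirected graph as adjacency sets (hypothetical example).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def fringe_value(coalition):
    """Characteristic function of a simple coverage game: the value of a
    coalition is the number of nodes it contains or is adjacent to."""
    covered = set(coalition)
    for v in coalition:
        covered |= graph[v]
    return len(covered)

def shapley_estimate(samples=2000, seed=0):
    """Estimate each node's Shapley value by averaging its marginal
    contribution over random orderings of the nodes."""
    rng = random.Random(seed)
    nodes = list(graph)
    value = {v: 0.0 for v in nodes}
    for _ in range(samples):
        rng.shuffle(nodes)
        coalition, prev = [], 0
        for v in nodes:
            coalition.append(v)
            cur = fringe_value(coalition)
            value[v] += cur - prev  # marginal contribution of v
            prev = cur
    return {v: value[v] / samples for v in nodes}

centrality = shapley_estimate()
print(centrality)  # node "b", the highest-degree node, scores highest
```

By efficiency of the Shapley value, the estimated centralities sum to the value of the grand coalition (here, the 4 covered nodes), which is a useful sanity check on any implementation.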


Reasoning About the Transfer of Control

arXiv.org Artificial Intelligence

We present DCL-PC: a logic for reasoning about how the abilities of agents and coalitions of agents are altered by transferring control from one agent to another. The logical foundation of DCL-PC is CL-PC, a logic for reasoning about cooperation in which the abilities of agents and coalitions of agents stem from a distribution of atomic Boolean variables to individual agents -- the choices available to a coalition correspond to assignments to the variables the coalition controls. The basic modal constructs of DCL-PC are of the form coalition C can cooperate to bring about phi. DCL-PC extends CL-PC with dynamic logic modalities in which atomic programs are of the form agent i gives control of variable p to agent j; as usual in dynamic logic, these atomic programs may be combined using sequence, iteration, choice, and test operators to form complex programs. By combining such dynamic transfer programs with cooperation modalities, it becomes possible to reason about how the power of agents and coalitions is affected by the transfer of control. We give two alternative semantics for the logic: a direct semantics, in which we capture the distributions of Boolean variables to agents; and a more conventional Kripke semantics. We prove that these semantics are equivalent, and then present an axiomatization for the logic. We investigate the computational complexity of model checking and satisfiability for DCL-PC, and show that both problems are PSPACE-complete (and hence no worse than the underlying logic CL-PC). Finally, we investigate the characterisation of control in DCL-PC. We distinguish between first-order control -- the ability of an agent or coalition to control some state of affairs through the assignment of values to the variables under the control of the agent or coalition -- and second-order control -- the ability of an agent to exert control over the control that other agents have by transferring variables to other agents. 
We give a logical characterisation of second-order control.
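The interaction between control and transfer that the abstract describes can be sketched in a few lines. In this hypothetical toy (the agent names, variables, and goal formula are all invented), a coalition "can cooperate to bring about phi" if some reassignment of the coalition's own variables, with everyone else's variables held fixed, satisfies phi; a transfer program then reshapes what each agent can achieve:

```python
from itertools import product

def can_bring_about(coalition, phi, control, state):
    """CL-PC-style ability: does some assignment to the coalition's
    variables, with all other variables fixed at `state`, satisfy phi?"""
    owned = sorted(v for agent in coalition for v in control[agent])
    for bits in product([False, True], repeat=len(owned)):
        candidate = dict(state, **dict(zip(owned, bits)))
        if phi(candidate):
            return True
    return False

def transfer(control, var, giver, receiver):
    """DCL-PC-style atomic program: `giver` gives control of `var`
    to `receiver`, yielding a new distribution of variables."""
    new = {agent: set(vs) for agent, vs in control.items()}
    new[giver].remove(var)
    new[receiver].add(var)
    return new

control = {"i": {"p"}, "j": {"q"}}
state = {"p": False, "q": False}
phi = lambda s: s["p"] and s["q"]  # goal: p AND q

print(can_bring_about({"i"}, phi, control, state))   # False: i lacks q
control2 = transfer(control, "q", "j", "i")
print(can_bring_about({"i"}, phi, control2, state))  # True after transfer
```

Before the transfer, agent i alone cannot achieve p AND q; afterwards it can, which is exactly the kind of change in ability the dynamic modalities of DCL-PC are designed to express.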


Logics for Multiagent Systems

AI Magazine

We present a brief survey of logics for reasoning about multiagent systems. We focus on two paradigms: logics for cognitive models of agency, and logics used to model the strategic structure of a multiagent system.


Intentions in Equilibrium

AAAI Conferences

Intentions have been widely studied in AI, both in the context of decision-making within individual agents and in multi-agent systems. Work on intentions in multi-agent systems has focused on joint intention models, which characterise the mental state of agents with a shared goal engaged in teamwork. In the absence of shared goals, however, intentions play another crucial role in multi-agent activity: they provide a basis around which agents can mutually coordinate activities. Models based on shared goals do not attempt to account for or explain this role of intentions. In this paper, we present a formal model of multi-agent systems in which belief-desire-intention agents choose their intentions taking into account the intentions of others. To understand rational mental states in such a setting, we formally define and investigate notions of multi-agent intention equilibrium, which are related to equilibrium concepts in game theory.


How Inappropriately Heavyweight AI Solutions Dragged Down A Startup (and Made Me Realize that Industrial Salaries Are High for a Good Reason)

AI Magazine

Ten years ago I was a junior faculty member in a UK university, doing research into the theoretical foundations of multiagent systems. I enjoyed the research, but not the salary. The opportunity arose to work for a startup company at three times my university salary, and the company had already hired some excellent agent researchers that I knew, respected, and liked from conferences and workshops. The job seemed too good to be true; and of course, it was.

