
Collaborating Authors

 Murano, Aniello


A Model Checker for Natural Strategic Ability

arXiv.org Artificial Intelligence

In the last two decades, Alternating-time Temporal Logic (ATL) has proved very useful in modeling strategic reasoning for Multi-Agent Systems (MAS). However, this logic struggles to capture the bounded rationality inherent in human decision-making processes. To overcome these limitations, Natural Alternating-time Temporal Logic (NatATL) has recently been introduced. As an extension of ATL, NatATL incorporates bounded memory constraints into agents' strategies, allowing them to resemble human cognitive limitations. In this paper, we present a model-checking tool for NatATL specifications - both for memoryless strategies and strategies with recall - integrated into VITAMIN, an open-source model checker designed specifically for MAS verification. By embedding NatATL into VITAMIN, we turn theoretical advancements into a practical verification framework, enabling comprehensive analysis and validation of strategic reasoning in complex multi-agent environments. Our novel tool paves the way for applications in areas such as explainable AI and human-in-the-loop systems, highlighting NatATL's substantial potential.
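
As a rough illustration of the bounded-strategy idea, a memoryless natural strategy can be seen as a short ordered list of guarded actions, tried in order, whose complexity is bounded by its size. The Python sketch below is illustrative only; the function and data names are hypothetical and do not reflect VITAMIN's actual input format or API.

    # A memoryless natural strategy as an ordered list of (guard, action)
    # pairs; the first guard satisfied by the current state's labels wins.
    # Hypothetical names, not the NatATL tool's real interface.

    def natural_strategy_action(strategy, state_labels):
        for guard, action in strategy:
            if guard <= state_labels:   # guard: set of atoms that must hold
                return action
        return None                     # no guard matched

    # A small strategy with two guarded actions:
    strategy = [({"critical"}, "wait"), (set(), "step")]

    print(natural_strategy_action(strategy, {"critical", "busy"}))  # wait
    print(natural_strategy_action(strategy, {"idle"}))              # step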


Module checking of pushdown multi-agent systems

arXiv.org Artificial Intelligence

In this paper, we investigate the module-checking problem for pushdown multi-agent systems (PMS) against ATL and ATL* specifications. We establish that for ATL, module checking of PMS is 2EXPTIME-complete, the same complexity as pushdown module checking for CTL. On the other hand, we show that ATL* module checking of PMS turns out to be 4EXPTIME-complete, hence exponentially harder than both CTL* pushdown module checking and ATL* model checking of PMS. Our result for ATL* provides a rare example of a natural decision problem that is elementary yet has complexity higher than triply exponential time.
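
For readers unfamiliar with module checking, the underlying problem can be stated schematically as follows (a standard formulation, paraphrased here rather than quoted from the paper):

    % A module M satisfies \varphi reactively iff every computation tree
    % the environment can induce (by disabling some of its moves) satisfies it:
    M \models_r \varphi \iff \forall T \in \mathrm{exec}(M) :\; T \models \varphi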


Natural Strategic Ability in Stochastic Multi-Agent Systems

arXiv.org Artificial Intelligence

Strategies synthesized using formal methods can be complex and often require infinite memory, which does not correspond to the expected behavior when modeling Multi-Agent Systems (MAS). To capture such behaviors, natural strategies have recently been proposed as a framework that strikes a balance between agents' ability to strategize with memory and the model-checking complexity; until now, however, the framework has been restricted to fully deterministic settings. For the first time, we consider the probabilistic temporal logics PATL and PATL* under natural strategies (NatPATL and NatPATL*, resp.). As our main result we show that, in stochastic MAS, NatPATL model checking is NP-complete when the active coalition is restricted to deterministic strategies. We also give a 2NEXPTIME complexity result for NatPATL* with the same restriction. In the unrestricted case, we establish EXPSPACE complexity for NatPATL and 3EXPSPACE complexity for NatPATL*.
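
As a purely schematic illustration of what such specifications look like (the paper's exact syntax may differ), a NatPATL-style formula bounds both the probability of success and the size of the coalition's strategy:

    % Coalition A has a natural strategy of complexity at most k under
    % which "eventually goal" holds with probability at least 0.9:
    \langle\!\langle A \rangle\!\rangle^{\leq k}_{\geq 0.9}\, \mathbf{F}\, \mathit{goal}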


Scalable Verification of Strategy Logic through Three-valued Abstraction

arXiv.org Artificial Intelligence

The model-checking problem for multi-agent systems against Strategy Logic specifications is known to be non-elementary. Several fragments of this logic have been defined to tackle this issue, but at the expense of expressiveness. In this paper, we propose a three-valued semantics for Strategy Logic, upon which we define an abstraction method. We show that the three-valued semantics is an approximation of the classic two-valued one for Strategy Logic. Furthermore, we extend MCMAS, an open-source model checker for multi-agent specifications, to incorporate our abstraction method, and we present some promising experimental results.
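
A minimal sketch of the three-valued idea, using Kleene-style connectives over true/false/unknown (the paper's semantics for Strategy Logic is of course much richer):

    # Kleene-style three-valued connectives: the third value 'uu' (unknown)
    # is what an abstract check returns when the abstraction cannot decide
    # the property either way. Illustrative only.

    T, F, U = "tt", "ff", "uu"

    def t_not(x):
        return {T: F, F: T, U: U}[x]

    def t_and(x, y):
        if F in (x, y):
            return F
        if U in (x, y):
            return U
        return T

    def t_or(x, y):
        return t_not(t_and(t_not(x), t_not(y)))

    # A definite tt/ff on the abstract model transfers to the concrete one;
    # an uu answer calls for refining the abstraction.
    print(t_and(T, U))  # uu -> refine
    print(t_or(T, U))   # tt -> property holds concretely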


Strategic Abilities of Forgetful Agents in Stochastic Environments

arXiv.org Artificial Intelligence

In this paper, we investigate probabilistic variants of the strategy logics ATL and ATL* under imperfect information. Specifically, we present novel decidability and complexity results for the case where the model transitions are stochastic and agents play uniform strategies. That is, the semantics of the logics is based on multi-agent, stochastic transition systems with imperfect information, which combine two sources of uncertainty: the partial observability agents have of the environment, and the likelihood of transitions occurring from a system state. Since the model-checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. The resulting setting captures the situation in which agents have qualitative uncertainty about the local state and quantitative uncertainty about the occurrence of future events. We illustrate the usefulness of this setting with meaningful examples.
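
A small Python sketch of the two ingredients, with hypothetical state and action names: a memoryless uniform strategy depends only on the current observation (so indistinguishable states get the same choice), while transitions are resolved probabilistically.

    import random

    # Imperfect information: s0 and s1 yield the same observation o1.
    obs = {"s0": "o1", "s1": "o1", "s2": "o2"}

    # A memoryless (positional) uniform strategy: observation -> action.
    strategy = {"o1": "a", "o2": "b"}

    def act(state):
        return strategy[obs[state]]

    assert act("s0") == act("s1")   # uniformity across indistinguishable states

    # Stochastic transitions supply the second source of uncertainty:
    trans = {("s0", "a"): [("s1", 0.5), ("s2", 0.5)]}
    states, probs = zip(*trans[("s0", "a")])
    print(random.choices(states, weights=probs)[0])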


Discounting in Strategy Logic

arXiv.org Artificial Intelligence

Discounting is an important dimension in multi-agent systems whenever we want to reason about strategies and time. It is a key aspect in economics, capturing the intuition that the far-away future is not as important as the near future. Traditional verification techniques allow one to check whether there is a winning strategy for a group of agents, but they do not take into account that satisfying a goal sooner is different from satisfying it after a long wait. In this paper, we augment Strategy Logic with future discounting over a set D of discounting functions, denoted SLdisc[D]. We consider "until" operators with discounting functions: the satisfaction value of an SLdisc[D] specification is a value in [0, 1], where the longer it takes to fulfill requirements, the smaller the satisfaction value. We motivate our approach with classical examples from Game Theory and study the complexity of model checking SLdisc[D] formulas.
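
One standard way to give "until" a discounted semantics, in the discounted-LTL tradition this work builds on (the precise SLdisc[D] definition is in the paper), is:

    % Discounted until over a path \pi with discounting function d \in D:
    % the later \varphi_2 is fulfilled, the more its value is scaled down.
    [\![\varphi_1\, \mathbf{U}_d\, \varphi_2]\!](\pi)
      = \sup_{i \geq 0} \min\Big( d(i)\,[\![\varphi_2]\!](\pi_{\geq i}),\;
          \min_{0 \leq j < i} d(j)\,[\![\varphi_1]\!](\pi_{\geq j}) \Big)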


Equilibria for Games with Combined Qualitative and Quantitative Objectives

arXiv.org Artificial Intelligence

From a game-theoretic standpoint, agents/processes in a multi-agent system can be understood as players in a game played on a directed graph (a transition system), acting strategically and independently in pursuit of their preferences. In this setting, the possible behaviours of agents correspond to the strategies of players. One important strand of work in this tradition has been the development of techniques for reasoning about which properties players (or coalitions of players) can bring about, i.e., whether they have "winning strategies" for certain conditions [4]. Recently, attention has begun to shift from the analysis of strategic ability to the analysis of the equilibrium properties of such systems. A typical question in this setting is whether a particular temporal property holds under the assumption that players select strategies that collectively form a Nash equilibrium [35]. A fundamental question in this work is how the preferences of agents are represented.
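
The equilibrium question rests on the standard definition of Nash equilibrium: no player can improve its payoff by deviating unilaterally. A minimal check for a finite normal-form game, with made-up payoffs (the games studied in this line of work are richer, being played on graphs):

    from itertools import product

    # Two players; payoffs[profile] = (payoff of player 0, payoff of player 1).
    payoffs = {
        ("a", "x"): (2, 1), ("a", "y"): (0, 0),
        ("b", "x"): (0, 0), ("b", "y"): (1, 2),
    }
    actions = [("a", "b"), ("x", "y")]   # action sets of players 0 and 1

    def is_nash(profile):
        for i, acts in enumerate(actions):
            for deviation in acts:       # unilateral deviation by player i
                alt = list(profile)
                alt[i] = deviation
                if payoffs[tuple(alt)][i] > payoffs[profile][i]:
                    return False
        return True

    print([p for p in product(*actions) if is_nash(p)])
    # [('a', 'x'), ('b', 'y')]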


Changing Observations in Epistemic Temporal Logic

arXiv.org Artificial Intelligence

We study dynamic changes of agents' observational power in logics of knowledge and time. We consider CTL*K, the extension of CTL* with knowledge operators, and enrich it with a new operator that models a change in an agent's way of observing the system. We extend the classic semantics of knowledge for perfect-recall agents to account for changes of observation, and we show that the new operator strictly increases the expressivity of CTL*K. We reduce the model-checking problem for our logic to that for CTL*K, which is known to be decidable. This yields a solution to the model-checking problem for our logic, but not one of optimal complexity; we therefore also provide a direct decision procedure with better complexity.
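
Schematically, the synchronous perfect-recall semantics of knowledge that the new operator perturbs reads as follows (a standard formulation; the paper extends it to account for observation changes):

    % Agent a knows \varphi at time t of run \rho iff \varphi holds at every
    % run whose observation history is, for a, indistinguishable from \rho's:
    (\rho, t) \models K_a \varphi \iff
      \forall \rho' :\; \mathrm{obs}_a(\rho_{\leq t}) = \mathrm{obs}_a(\rho'_{\leq t})
        \Rightarrow (\rho', t) \models \varphi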


Planning and Synthesis Under Assumptions

arXiv.org Artificial Intelligence

In Reasoning about Action and Planning, one synthesizes the agent's plan by taking advantage of assumptions on how the environment works (that is, one exploits the environment's effects, its fairness, and its trajectory constraints). In this paper we study this form of synthesis in detail. We consider assumptions as constraints on the possible strategies the environment can adopt in responding to the agent's actions. Such constraints may be given in the form of a planning domain (or action theory), as linear-time formulas over infinite or finite runs, or as a combination of the two (e.g., FOND under fairness). We argue, though, that not all assumption specifications are meaningful: they need to be consistent, meaning that there must exist an environment strategy fulfilling the assumption in spite of the agent's actions. For such assumptions, we study how to do synthesis/planning for agent goals ranging from classical reachability to goals on traces specified in LTL and LTLf/LDLf, characterizing the problem both mathematically and algorithmically.
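
Schematically, the consistency requirement and the synthesis task just described can be phrased as follows (our paraphrase, with out(σ, γ) denoting the run induced by agent strategy σ against environment strategy γ):

    % Consistency of an assumption \alpha: some environment strategy
    % fulfils \alpha no matter how the agent acts.
    \exists \gamma\ \forall \sigma :\; \mathrm{out}(\sigma, \gamma) \models \alpha

    % Synthesis under \alpha: find an agent strategy \sigma such that
    \forall \gamma :\; \mathrm{out}(\sigma, \gamma) \models \alpha
      \;\Rightarrow\; \mathrm{out}(\sigma, \gamma) \models \mathit{goal}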


Verifying and Synthesising Multi-Agent Systems against One-Goal Strategy Logic Specifications

AAAI Conferences

Strategy Logic (SL) has recently come to the fore as a useful specification language for reasoning about multi-agent systems. Its one-goal fragment, SL[1G], is of particular interest as it strictly subsumes widely used logics such as ATL*, while maintaining attractive complexity features. In this paper we put forward an automata-based methodology for verifying and synthesising multi-agent systems against specifications given in SL[1G]. We show that the algorithm is sound and optimal from a computational point of view. A key feature of the approach is that all data structures can be encoded as BDDs and all operations on them carried out at the BDD level. We report on a BDD-based model checker implementing the algorithm and evaluate its performance on fair process scheduler synthesis.
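
While the actual checker operates on BDDs, the flavour of the underlying computation can be sketched with explicit sets standing in for symbolic state sets. Below, a least-fixpoint computation of the states from which a single agent can force reaching a target (illustrative only; in the multi-agent SL[1G] setting the predecessor operator also quantifies over the other players' moves):

    # Controllable-predecessor fixpoint for a reachability objective.
    # Explicit sets stand in for the BDDs used by the real tool.

    trans = {  # state -> {action: successor}
        "s0": {"a": "s1", "b": "s0"},
        "s1": {"a": "s2", "b": "s0"},
        "s2": {"a": "s2", "b": "s2"},
    }
    TARGET = {"s2"}

    def cpre(win):
        """States with some action leading into the current winning set."""
        return {s for s, moves in trans.items()
                if any(t in win for t in moves.values())}

    win = set(TARGET)
    while True:                     # least fixpoint: win = TARGET ∪ cpre(win)
        new = win | cpre(win)
        if new == win:
            break
        win = new

    print(sorted(win))              # ['s0', 's1', 's2']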