A Model Checker for Natural Strategic Ability
Aruta, Marco, Malvone, Vadim, Murano, Aniello
In the last two decades, Alternating-time Temporal Logic (ATL) has proved very useful for modeling strategic reasoning in Multi-Agent Systems (MAS). However, this logic struggles to capture the bounded rationality inherent in human decision-making. To overcome these limitations, Natural Alternating-time Temporal Logic (NatATL) has recently been introduced. As an extension of ATL, NatATL incorporates bounded memory constraints into agents' strategies, allowing it to reflect human cognitive limitations. In this paper, we present a model checker tool for NatATL specifications - both for memoryless strategies and strategies with recall - integrated into VITAMIN, an open-source model checker designed specifically for MAS verification. By embedding NatATL into VITAMIN, we turn theoretical advancements into a practical verification framework, enabling comprehensive analysis and validation of strategic reasoning in complex multi-agent environments. Our novel tool paves the way for applications in areas such as explainable AI and human-in-the-loop systems, highlighting NatATL's substantial potential.
3vLTL: A Tool to Generate Automata for Three-valued LTL
Belardinelli, Francesco, Ferrando, Angelo, Malvone, Vadim
Multi-valued logics have a long tradition in the literature on system verification, including run-time verification. However, comparatively few model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate Büchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted on a three-valued semantics. Given an LTL formula, a set of atomic propositions as the alphabet for the automaton, and a truth value, our procedure generates a Büchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. Thanks to the format of the tool's output, the resulting automaton can also be seamlessly processed by third-party libraries. That is, the Büchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
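The truth-value lattice underlying such a three-valued semantics can be illustrated with the strong Kleene connectives, which are commonly used to interpret the propositional layer of three-valued LTL. The sketch below is illustrative only and is not the 3vLTL implementation; the value names `tt`, `ff`, `uu` are assumptions for exposition.

```python
# Strong Kleene three-valued connectives (values: true, false, undefined),
# as commonly used for the propositional layer of a three-valued LTL.
# Illustrative sketch only; not the 3vLTL tool's actual code.

TT, FF, UU = "tt", "ff", "uu"  # true, false, undefined

def neg3(v):
    # Negation swaps tt and ff; undefined stays undefined.
    return {TT: FF, FF: TT, UU: UU}[v]

def and3(a, b):
    # Conjunction: ff dominates, tt is neutral, anything else is undefined.
    if FF in (a, b):
        return FF
    if a == TT and b == TT:
        return TT
    return UU

def or3(a, b):
    # Disjunction as the De Morgan dual of conjunction.
    return neg3(and3(neg3(a), neg3(b)))
```

For instance, `and3(TT, UU)` yields `uu`: a word may well assign the value undefined to a formula, and that verdict class is exactly what a dedicated automaton can be built to accept.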
Scalable Verification of Strategy Logic through Three-valued Abstraction
Belardinelli, Francesco, Ferrando, Angelo, Jamroga, Wojciech, Malvone, Vadim, Murano, Aniello
The model checking problem for multi-agent systems against Strategy Logic specifications is known to be non-elementary. Several fragments of this logic have been defined to tackle this issue, but at the expense of expressiveness. In this paper, we propose a three-valued semantics for Strategy Logic, upon which we define an abstraction method. We show that the latter semantics is an approximation of the classic two-valued one for Strategy Logic. Furthermore, we extend MCMAS, an open-source model checker for multi-agent specifications, to incorporate our abstraction method, and present some promising experimental results.
Natural Strategic Abilities in Voting Protocols
Jamroga, Wojciech, Kurpiewski, Damian, Malvone, Vadim
Security properties often focus on the technological side of the system. One implicitly assumes that the users will behave in the right way to preserve the property at hand. In real life, this cannot be taken for granted. In particular, security mechanisms that are difficult and costly to use are often ignored by the users, and do not really defend the system against possible attacks. Here, we propose a graded notion of security based on the complexity of the user's strategic behavior. More precisely, we suggest that the level to which a security property $\varphi$ is satisfied can be defined in terms of (a) the complexity of the strategy that the voter needs to execute to make $\varphi$ true, and (b) the resources that the user must employ on the way. The simpler and cheaper it is to obtain $\varphi$, the higher the degree of security. We demonstrate how the idea works in a case study based on an electronic voting scenario. To this end, we model the vVote implementation of the Prêt à Voter protocol for coercion-resistant and voter-verifiable elections. Then, we identify "natural" strategies for the voter to obtain receipt-freeness, and measure the voter's effort that they require. We also look at how hard it is for the coercer to compromise the election through a randomization attack.
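One common way to make the "effort" of a natural strategy concrete is to represent the strategy as an ordered list of guarded actions and to measure its complexity by the total size of the guards. The sketch below is a minimal illustration under that assumption; the state labels and metric are simplified placeholders, not the paper's exact model of vVote.

```python
# Illustrative sketch: a "natural" (memoryless) strategy as an ordered list of
# guarded actions, with complexity measured as the total size of its guards.
# Representation and metric are simplifying assumptions for exposition.

from dataclasses import dataclass

@dataclass
class GuardedAction:
    guard: list   # guard as a list of literals, e.g. ["ballot_marked", "!receipt_checked"]
    action: str

def complexity(strategy):
    # One common metric: the sum of guard sizes (number of literals per guard).
    return sum(len(rule.guard) for rule in strategy)

# Hypothetical voter strategy for a receipt-freeness-style property.
voter_strategy = [
    GuardedAction(["!ballot_marked"], "mark_ballot"),
    GuardedAction(["ballot_marked", "!receipt_checked"], "check_receipt"),
    GuardedAction(["true"], "wait"),  # default catch-all rule
]
```

Under this metric the strategy above has complexity 4, and a graded security level can then compare such scores across the strategies that achieve the property.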
The Impact of Strategies and Information in Model Checking for Multi-Agent Systems
Malvone, Vadim
System correctness is one of the most crucial and challenging objectives in software and hardware systems. With the increasing evolution of connected and distributed systems, ensuring their correctness requires formal verification for multi-agent systems. In this paper, we present a summary of selected results on model checking for multi-agent systems that derive from the choice of strategies and the information available to agents. Additionally, we discuss some open directions for future research.
Capacity ATL
Ballot, Gabriel, Malvone, Vadim, Leneutre, Jean, Laarouchi, Youssef
Model checking of strategic abilities has been successfully developed and applied since the early 2000s to ensure properties in Multi-Agent Systems. In this paper, we introduce the notion of capacities, which give different abilities to an agent. This applies naturally to systems where multiple entities can play the same role in the game, such as different client versions in protocol analysis, different robots in heterogeneous fleets, different personality traits in social structure modeling, or different attacker profiles in a cybersecurity setting. With the capacities of other agents being unknown at the beginning of the game, the long-standing problems of imperfect information arise. Our contribution is the following: (i) we define a new class of concurrent game structures where the agents have different capacities that modify their action lists, and (ii) we introduce a logic extending Alternating-time Temporal Logic to reason about these games.
Towards the Combination of Model Checking and Runtime Verification on Multi-Agent Systems
Ferrando, Angelo, Malvone, Vadim
Intelligent systems, such as Multi-Agent Systems (MAS), can be seen as sets of intelligent entities capable of proactively deciding how to act to fulfill their own goals. These entities, generally called agents, are notoriously autonomous, i.e., they do not require input from a user to act, and social, i.e., they usually communicate with each other to achieve common goals. Software systems are not easy to trust in general. This is especially true for complex and distributed systems, such as MAS. Because of this, we need verification techniques to check that such systems behave as expected. More specifically, in the case of MAS, it is relevant to know whether the agents are capable of achieving their own goals, by themselves or by collaborating with other agents to form a coalition. This is usually referred to as the process of finding a strategy for the agent(s). A well-known formalism for reasoning about strategic behaviours in MAS is Alternating-time Temporal Logic (ATL) [1]. Before verifying ATL specifications, two questions need to be answered: (i) does each agent know everything about the system?
Approximating Perfect Recall when Model Checking Strategic Abilities: Theory and Applications
Belardinelli, Francesco (Imperial College London, United Kingdom and Laboratoire IBISC, Université d'Evry, France) | Lomuscio, Alessio (Imperial College London, United Kingdom) | Malvone, Vadim | Yu, Emily (Johannes Kepler University Linz, Austria)
The model checking problem for multi-agent systems against specifications in the alternating-time temporal logic ATL, hence ATL∗, under perfect recall and imperfect information is known to be undecidable. To tackle this problem, in this paper we investigate a notion of bounded recall under incomplete information. We present a novel three-valued semantics for ATL∗ in this setting and analyse the corresponding model checking problem. We show that the three-valued semantics here introduced is an approximation of the classic two-valued semantics, then give a sound, albeit partial, algorithm for model checking two-valued perfect recall via its approximation as three-valued bounded recall. Finally, we extend MCMAS, an open-source model checker for ATL and other agent specifications, to incorporate bounded recall; we illustrate its use and present experimental results.
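The way a three-valued bounded-recall verdict approximates the two-valued perfect-recall semantics can be sketched as follows: a defined verdict (true or false) carries over to the two-valued semantics, while an undefined verdict is inconclusive and one can retry with a larger bound. The function names, bound scheme, and checker interface below are placeholders for exposition, not MCMAS's actual interface.

```python
# Sketch of a partial algorithm for two-valued perfect recall via its
# approximation as three-valued bounded recall. All names are illustrative
# placeholders; this is not the extended MCMAS interface.

def check_bounded_recall(model, formula, bound, three_valued_checker):
    # three_valued_checker returns "tt", "ff", or "uu" for a given recall bound.
    verdict = three_valued_checker(model, formula, bound)
    if verdict == "tt":
        return True    # defined verdict: formula holds under perfect recall
    if verdict == "ff":
        return False   # defined verdict: formula fails under perfect recall
    return None        # undefined: this bound is too small to decide

def iterative_check(model, formula, three_valued_checker, max_bound=8):
    # Increase the recall bound until a defined verdict appears (if ever).
    for k in range(1, max_bound + 1):
        result = check_bounded_recall(model, formula, k, three_valued_checker)
        if result is not None:
            return result, k
    return None, max_bound  # soundness without completeness: may stay undefined
```

This mirrors why the algorithm is sound but partial: whenever a defined answer is returned it agrees with the two-valued semantics, but no bound is guaranteed to produce one.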
Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect Information
Ferrando, Angelo, Malvone, Vadim
In logics for strategic reasoning, the main challenge is verification in contexts of imperfect information and perfect recall. In this work, we show a technique to approximate the verification of Alternating-time Temporal Logic (ATL*) under imperfect information and perfect recall, which is known to be undecidable. Given a model M and a formula $\varphi$, we propose a verification procedure that generates sub-models of M in which each sub-model M' satisfies a sub-formula $\varphi'$ of $\varphi$ and the verification of $\varphi'$ in M' is decidable. Then, we use CTL* model checking to provide a verification result for $\varphi$ on M. We prove that our procedure is in the same complexity class as ATL* model checking under perfect information and perfect recall, present a tool that implements our procedure, and provide experimental results.