Collaborating Authors

 Ferrando, Angelo


RV4Chatbot: Are Chatbots Allowed to Dream of Electric Sheep?

arXiv.org Artificial Intelligence

Chatbots have become integral to various application domains, including those with safety-critical considerations. As a result, there is a pressing need for methods that ensure chatbots consistently adhere to expected, safe behaviours. In this paper, we introduce RV4Chatbot, a Runtime Verification framework designed to monitor deviations in chatbot behaviour. We formalise expected behaviours as interaction protocols between the user and the chatbot. We present the RV4Chatbot design and describe two implementations that instantiate it: RV4Rasa, for monitoring chatbots created with the Rasa framework, and RV4Dialogflow, for monitoring Dialogflow chatbots. Additionally, we detail experiments conducted in a factory automation scenario using both RV4Rasa and RV4Dialogflow.
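
The paper formalises expected behaviour as a user-chatbot interaction protocol that a monitor checks at runtime. A minimal sketch of the idea in Python follows; the protocol, states, and event names are invented for illustration and are not the RV4Chatbot API:

# Minimal runtime-monitor sketch: the interaction protocol is a
# finite-state machine over dialogue events. States, events, and
# transitions are illustrative only, not the RV4Chatbot API.
class ProtocolMonitor:
    def __init__(self, transitions, initial):
        self.transitions = transitions  # (state, event) -> next state
        self.state = initial

    def observe(self, event):
        """Feed one dialogue event; return False on a protocol violation."""
        key = (self.state, event)
        if key not in self.transitions:
            return False  # deviation from the expected behaviour
        self.state = self.transitions[key]
        return True

# Toy factory-automation protocol: the user must confirm before the
# chatbot triggers a machine command.
protocol = {
    ("idle", "user_request"): "awaiting_confirmation",
    ("awaiting_confirmation", "user_confirm"): "executing",
    ("executing", "bot_done"): "idle",
}
monitor = ProtocolMonitor(protocol, "idle")
assert monitor.observe("user_request")   # allowed
assert not monitor.observe("bot_done")   # violation: no confirmation yet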


ROSMonitoring 2.0: Extending ROS Runtime Verification to Services and Ordered Topics

arXiv.org Artificial Intelligence

Formal verification of robotic applications presents challenges due to their hybrid nature and distributed architecture. This paper introduces ROSMonitoring 2.0, an extension of ROSMonitoring designed to facilitate the monitoring of both topics and services while taking into account the order in which messages are published and received. The framework has been enhanced to support these novel features in ROS1 environments -- and partially in ROS2 -- offering improved real-time support, security, scalability, and interoperability. We discuss the modifications made to accommodate these advancements and present results obtained from a case study involving the runtime monitoring of specific components of a fire-fighting Uncrewed Aerial Vehicle (UAV).
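
As an illustration of order-aware monitoring, the sketch below flags messages observed out of publication order; the per-topic sequence number is an assumption made for the example, not the ROSMonitoring 2.0 message format or instrumentation:

# Framework-independent sketch of an order-aware topic monitor. It
# assumes each intercepted message carries a per-topic sequence number
# assigned at publication time (an illustrative scheme only).
from collections import defaultdict

class OrderedTopicMonitor:
    def __init__(self):
        self.last_seq = defaultdict(lambda: -1)  # topic -> last seq seen

    def on_message(self, topic, seq, payload):
        """Return a verdict for one intercepted message."""
        if seq <= self.last_seq[topic]:
            return f"violation: {topic} message {seq} received out of order"
        self.last_seq[topic] = seq
        return "ok"

monitor = OrderedTopicMonitor()
print(monitor.on_message("/extinguisher/cmd", 0, {"spray": True}))   # ok
print(monitor.on_message("/extinguisher/cmd", 2, {"spray": False}))  # ok
print(monitor.on_message("/extinguisher/cmd", 1, {"spray": True}))   # violation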


Open Challenges in the Formal Verification of Autonomous Driving

arXiv.org Artificial Intelligence

In the realm of autonomous driving, the development and integration of highly complex and heterogeneous systems are standard practice. Modern vehicles are not monolithic systems; instead, they are composed of diverse hardware components, each running its own software systems. An autonomous vehicle comprises numerous independent components, often developed by different and potentially competing companies. This diversity poses significant challenges for the certification process, as it necessitates certifying components that may not disclose their internal behaviour (black boxes). In this paper, we present a real-world case study of an autonomous driving system, identify key open challenges associated with its development and integration, and explore how formal verification techniques can address these challenges to ensure system reliability and safety.


3vLTL: A Tool to Generate Automata for Three-valued LTL

arXiv.org Artificial Intelligence

Multi-valued logics have a long tradition in the literature on system verification, including runtime verification. However, comparatively few model-checking tools have been developed for multi-valued specification languages. We present 3vLTL, a tool to generate Büchi automata from formulas in Linear-time Temporal Logic (LTL) interpreted under a three-valued semantics. Given an LTL formula, a set of atomic propositions as the alphabet of the automaton, and a truth value, our procedure generates a Büchi automaton that accepts all the words that assign the chosen truth value to the LTL formula. Thanks to its output format, the tool can be seamlessly combined with third-party libraries: the generated Büchi automaton can then be used in the context of formal verification to check whether an LTL formula is true, false, or undefined on a given model.
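
For intuition, one common three-valued reading of LTL (the LTL3-style semantics of Bauer et al.; the semantics implemented by 3vLTL may differ in detail) evaluates a formula $\varphi$ on a finite prefix $u$ by quantifying over all infinite extensions:

% Illustrative three-valued (LTL3-style) valuation of a finite prefix u
[u \models \varphi] =
\begin{cases}
  \top & \text{if } uw \models \varphi \text{ for all } w \in \Sigma^{\omega},\\
  \bot & \text{if } uw \not\models \varphi \text{ for all } w \in \Sigma^{\omega},\\
  {?}  & \text{otherwise.}
\end{cases}

Under any such semantics, fixing one of the three truth values carves out a language of words, which is exactly the language the generated Büchi automaton accepts.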


Scalable Verification of Strategy Logic through Three-valued Abstraction

arXiv.org Artificial Intelligence

The model checking problem for multi-agent systems against Strategy Logic specifications is known to be non-elementary. Several fragments of this logic have been defined to tackle this issue, but at the expense of expressiveness. In this paper, we propose a three-valued semantics for Strategy Logic, upon which we define an abstraction method. We show that this semantics is an approximation of the classic two-valued semantics for Strategy Logic. Furthermore, we extend MCMAS, an open-source model checker for multi-agent specifications, to incorporate our abstraction method, and we present some promising experimental results.
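
The practical value of such an approximation is the standard preservation pattern of three-valued abstraction, stated generically here (this is not the paper's exact theorem):

% Generic preservation pattern for a three-valued abstraction M^A of a
% concrete model M; illustrative only.
(M^A \models_3 \varphi) = \top \;\Rightarrow\; M \models \varphi,
\qquad
(M^A \models_3 \varphi) = \bot \;\Rightarrow\; M \not\models \varphi.

Only the undefined verdict gives no information about the concrete model, in which case the abstraction is typically refined.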


Proceedings of the Third Workshop on Agents and Robots for reliable Engineered Autonomy

arXiv.org Artificial Intelligence

The volume comprises the proceedings of the Third Workshop on Agents and Robots for reliable Engineered Autonomy (AREA 2023), held alongside the 26th European Conference on Artificial Intelligence (ECAI 2023). It explores the convergence of autonomous agents and robotics, emphasizing the practical application of agents in real-world scenarios with physical interactions. The workshop highlights the growing importance of enhanced autonomy and reliable behavior in robotic systems and the need for novel verification and validation methods. Its primary objective is to promote collaboration between researchers in these fields, aiming to address complex challenges in autonomous robotic systems. The volume includes 7 full papers and 5 short papers.


Towards the Combination of Model Checking and Runtime Verification on Multi-Agent Systems

arXiv.org Artificial Intelligence

Intelligent systems, such as Multi-Agent Systems (MAS), can be seen as sets of intelligent entities capable of proactively deciding how to act to fulfill their own goals. These entities, generally called agents, are notoriously autonomous, i.e., they do not expect input from a user to act, and social, i.e., they usually communicate with each other to achieve common goals. Software systems are not easy to trust in general; this is especially true for complex and distributed systems such as MAS. Because of this, we need verification techniques to check that such systems behave as expected. More specifically, in the case of MAS, it is relevant to know whether the agents are capable of achieving their own goals, by themselves or by forming a coalition with other agents. This is usually referred to as finding a strategy for the agent(s). A well-known formalism for reasoning about strategic behaviours in MAS is Alternating-time Temporal Logic (ATL) [1]. Before verifying ATL specifications, two questions need to be answered: (i) does each agent know everything about the system?
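
As a concrete example of such strategic properties (an illustrative formula, not one taken from the paper), the ATL specification below states that the coalition $\{a, b\}$ has a joint strategy to guarantee that a shared goal is eventually reached, no matter how the other agents behave:

% Illustrative ATL formula: agents a and b can cooperate to ensure
% that the goal eventually holds.
\langle\langle \{a, b\} \rangle\rangle \, \Diamond \, \mathit{goal}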


Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect Information

arXiv.org Artificial Intelligence

In logics for strategic reasoning, the main challenge is verification in contexts of imperfect information and perfect recall. In this work, we present a technique to approximate the verification of Alternating-time Temporal Logic (ATL*) under imperfect information and perfect recall, which is known to be undecidable. Given a model M and a formula $\varphi$, we propose a verification procedure that generates sub-models of M such that each sub-model M' satisfies a sub-formula $\varphi'$ of $\varphi$ and the verification of $\varphi'$ on M' is decidable. Then, we use CTL* model checking to provide a verification result for $\varphi$ on M. We prove that our procedure is in the same complexity class as ATL* model checking under perfect information and perfect recall, present a tool that implements the procedure, and report experimental results.
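
A high-level sketch of the shape of this procedure is given below; the decomposition and combination steps are hypothetical placeholders, not the interface of the paper's tool:

# Sketch of the approximation scheme: decompose the problem into
# decidable sub-model/sub-formula pairs, check each with CTL* model
# checking, and combine the verdicts. 'generate_submodels' and
# 'ctl_star_check' are hypothetical placeholders, and the combination
# rule shown is a simplification for illustration.
def approximate_check(model, formula, generate_submodels, ctl_star_check):
    verdicts = [ctl_star_check(sub_model, sub_formula)
                for sub_model, sub_formula in generate_submodels(model, formula)]
    if verdicts and all(verdicts):
        return True    # evidence that the formula holds on the original model
    if verdicts and not any(verdicts):
        return False   # evidence that the formula fails on the original model
    return None        # undefined: the approximation is inconclusive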


Towards Integrating Formal Verification of Autonomous Robots with Battery Prognostics and Health Management

arXiv.org Artificial Intelligence

The battery is a key component of autonomous robots: its performance limits the robot's safety and reliability. Unlike liquid fuel, a battery, as a chemical device, exhibits complicated features, including (i) capacity fade over successive recharges and (ii) an increasing discharge rate as the state of charge (SOC) goes down for a given power demand. Existing formal verification studies of autonomous robots, when considering energy constraints, formalise the energy component in a generic manner, so that these battery features are overlooked. In this paper, we model an unmanned aerial vehicle (UAV) inspection mission on a wind farm and, via probabilistic model checking in PRISM, show (i) how the battery features may affect the verification results significantly in practical cases; and (ii) how the battery features, together with dynamic environments and battery safety strategies, jointly affect the verification results. Potential solutions to explicitly integrate battery prognostics and health management (PHM) with formal verification of autonomous robots are also discussed to motivate future work.

Keywords: Formal verification · Probabilistic model checking · PRISM · Autonomous systems · Unmanned aerial vehicle · Battery PHM.

1 Introduction

Autonomous robots, such as unmanned aerial vehicles (UAVs, commonly termed drones), unmanned underwater vehicles (UUVs), self-driving cars and legged robots, are finding increasingly widespread application in many domains [14].
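
To see why these two battery features matter, consider the toy discharge model below; all coefficients are invented for illustration and are unrelated to the paper's PRISM models. At a constant power demand the terminal voltage sags as the SOC drops, so the drawn current (the discharge rate) rises near empty, and the usable capacity shrinks with every recharge cycle:

# Toy battery model illustrating (i) capacity fade over recharge cycles
# and (ii) a discharge rate that grows as the SOC drops under constant
# power demand. All coefficients are invented for illustration.
def simulate_mission(power_w=100.0, cycles=0, dt_s=1.0):
    capacity_ah = 5.0 * (1.0 - 0.002 * cycles)  # (i) fade per recharge cycle
    soc, t = 1.0, 0.0
    while soc > 0.2:                            # return at a 20% safety margin
        voltage = 10.0 + 2.0 * soc              # terminal voltage sags with SOC
        current = power_w / voltage             # (ii) discharge rate rises near empty
        soc -= (current * dt_s / 3600.0) / capacity_ah
        t += dt_s
    return t                                    # mission endurance in seconds

print(simulate_mission(cycles=0))    # fresh battery
print(simulate_mission(cycles=100))  # aged battery: noticeably shorter

A generic energy model that subtracts a fixed amount per time step would miss both effects, which is precisely the gap the paper points at.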