Fisher, Michael
Autonomy and Safety Assurance in the Early Development of Robotics and Autonomous Systems
Abeywickrama, Dhaminda B., Fisher, Michael, Wheeler, Frederic, Dennis, Louise
This report provides an overview of the workshop titled Autonomy and Safety Assurance in the Early Development of Robotics and Autonomous Systems, hosted by the Centre for Robotic Autonomy in Demanding and Long-Lasting Environments (CRADLE) on September 2, 2024, at The University of Manchester, UK. The event brought together representatives from six regulatory and assurance bodies across diverse sectors to discuss challenges and evidence for ensuring the safety of autonomous and robotic systems, particularly autonomous inspection robots (AIR). The workshop featured six invited talks by the regulatory and assurance bodies. CRADLE aims to make assurance an integral part of engineering reliable, transparent, and trustworthy autonomous systems. Key discussions revolved around three research questions: (i) challenges in assuring safety for AIR; (ii) evidence for safety assurance; and (iii) how assurance cases need to differ for autonomous systems. Following the invited talks, the breakout groups further discussed the research questions using case studies from ground (rail), nuclear, underwater, and drone-based AIR. This workshop offered a valuable opportunity for representatives from industry, academia, and regulatory bodies to discuss challenges related to assured autonomy. Feedback from participants indicated a strong willingness to adopt a design-for-assurance process to ensure that robots are developed and verified to meet regulatory expectations.
ROSMonitoring 2.0: Extending ROS Runtime Verification to Services and Ordered Topics
Saadat, Maryam Ghaffari, Ferrando, Angelo, Dennis, Louise A., Fisher, Michael
Formal verification of robotic applications presents challenges due to their hybrid nature and distributed architecture. This paper introduces ROSMonitoring 2.0, an extension of ROSMonitoring designed to facilitate the monitoring of both topics and services while considering the order in which messages are published and received. The framework has been enhanced to support these novel features in ROS1 -- and partially in ROS2 environments -- offering improved real-time support, security, scalability, and interoperability. We discuss the modifications made to accommodate these advancements and present results obtained from a case study involving the runtime monitoring of specific components of a fire-fighting Uncrewed Aerial Vehicle (UAV).
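To illustrate the kind of ordered-message check described above, the following is a minimal, framework-free Python sketch of a runtime monitor that flags out-of-order messages on a single topic. The `OrderedTopicMonitor` class and its sequence-number scheme are our own illustrative assumptions, not ROSMonitoring's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class OrderedTopicMonitor:
    """Flags messages that arrive out of publication order on one topic.
    A hypothetical sketch: real ROS monitoring would attach to rospy/rclpy."""
    topic: str
    last_seq: int = -1
    violations: list = field(default_factory=list)

    def observe(self, seq: int, payload: object) -> bool:
        """Return True if the message respects publication order;
        record a violation otherwise."""
        ok = seq > self.last_seq
        if ok:
            self.last_seq = seq
        else:
            self.violations.append((seq, payload))
        return ok

monitor = OrderedTopicMonitor("/uav/water_level")  # hypothetical topic name
results = [monitor.observe(s, None) for s in [0, 1, 3, 2, 4]]
# seq 2 arrives after seq 3 was already seen, so it is flagged
```

A real deployment would subscribe this monitor to the instrumented topic and forward violations to an oracle, but the ordering logic itself is no more than the comparison shown here.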
Grand Challenges in the Verification of Autonomous Systems
Leahy, Kevin, Asgari, Hamid, Dennis, Louise A., Feather, Martin S., Fisher, Michael, Ibanez-Guzman, Javier, Logan, Brian, Olszewska, Joanna I., Redfield, Signe
Autonomous systems use independent decision-making with only limited human intervention to accomplish goals in complex and unpredictable environments. As the autonomy technologies that underpin them continue to advance, these systems will find their way into an increasing number of applications in an ever wider range of settings. If we are to deploy them to perform safety-critical or mission-critical roles, it is imperative that we have justified confidence in their safe and correct operation. Verification is the process by which such confidence is established. However, autonomous systems pose challenges to existing verification practices. This paper highlights viewpoints of the Roadmap Working Group of the IEEE Robotics and Automation Society Technical Committee for Verification of Autonomous Systems, identifying these grand challenges, and providing a vision for future research efforts that will be needed to address them.
Specifying Agent Ethics (Blue Sky Ideas)
Dennis, Louise A., Fisher, Michael
We consider the question of what properties a Machine Ethics system should have. This question is complicated by the existence of ethical dilemmas with no agreed-upon solution. We provide an example to motivate why we do not believe that falling back on the elicitation of values from stakeholders is sufficient to guarantee the correctness of such systems. We go on to define two broad categories of ethical property that have arisen in our own work and present a challenge to the community to approach this question in a more systematic way.
Developing Multi-Agent Systems with Degrees of Neuro-Symbolic Integration [A Position Paper]
Dennis, Louise, Farrell, Marie, Fisher, Michael
In this short position paper we highlight our ongoing work on heterogeneous multi-agent systems that combine symbolic components (logical, transparent, explainable, and verifiable, but slower and liable to be overwhelmed by data) with sub-symbolic ones and, in particular, the complex (and often non-functional) issues that impact the choice of structure within each agent. Our aim is to capture, in the goal specification G, key aspects that need to be considered or achieved in relation to this goal.
Towards Integrating Formal Verification of Autonomous Robots with Battery Prognostics and Health Management
Zhao, Xingyu, Osborne, Matt, Lantair, Jenny, Robu, Valentin, Flynn, David, Huang, Xiaowei, Fisher, Michael, Papacchini, Fabio, Ferrando, Angelo
The battery is a key component of autonomous robots: its performance limits the robot's safety and reliability. Unlike liquid fuel, a battery, as a chemical device, exhibits complicated features, including (i) capacity fade over successive recharges and (ii) an increasing discharge rate as the state of charge (SOC) goes down for a given power demand. Existing formal verification studies of autonomous robots, when considering energy constraints, formalise the energy component in a generic manner, so these battery features are overlooked. In this paper, we model an unmanned aerial vehicle (UAV) inspection mission on a wind farm and, via probabilistic model checking in PRISM, show (i) how the battery features may affect the verification results significantly in practical cases; and (ii) how the battery features, together with dynamic environments and battery safety strategies, jointly affect the verification results. Potential solutions to explicitly integrate battery prognostics and health management (PHM) with formal verification of autonomous robots are also discussed to motivate future work. Keywords: Formal verification · Probabilistic model checking · PRISM · Autonomous systems · Unmanned aerial vehicle · Battery PHM.
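The impact of feature (ii) can be illustrated with a toy simulation, entirely our own sketch rather than the paper's PRISM model; the drain law and all constants below are assumptions chosen only to show why a generic, constant-drain energy model can over-estimate mission feasibility.

```python
def mission_feasible(soc: float, steps: int,
                     base_drain: float = 0.02, k: float = 0.5) -> bool:
    """Toy flight model: per-step SOC drain grows as SOC falls, mimicking
    the increasing discharge rate for a fixed power demand. (Capacity fade
    over recharges, feature (i), would further shrink the usable budget.)"""
    for _ in range(steps):
        soc -= base_drain * (1 + k * (1 - soc))  # SOC-dependent drain
        if soc <= 0:
            return False
    return True

# A generic constant-drain model would call 41 steps safe from a full
# battery (41 * 0.02 = 0.82 < 1.0); the SOC-dependent model disagrees:
ok_40 = mission_feasible(1.0, 40)
ok_41 = mission_feasible(1.0, 41)
```

Here the SOC-dependent model exhausts the battery during step 41, while the constant-drain abstraction predicts a comfortable reserve: exactly the kind of divergence in verification results that the paper demonstrates at model-checking scale.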
Formal Specification and Verification of Autonomous Robotic Systems: A Survey
Luckcuck, Matt, Farrell, Marie, Dennis, Louise, Dixon, Clare, Fisher, Michael
An autonomous system is an artificially intelligent entity that makes decisions in response to input, independent of human interaction. Robotic systems are physical entities that interact with the physical world. Thus, we consider an autonomous robotic system to be a machine that uses Artificial Intelligence (AI) and has a physical presence in, and interacts with, the real world. Such systems are complex, inherently hybrid, combining both hardware and software, and they often require close safety, legal, and ethical consideration. Autonomous robotics are increasingly being used in commonplace scenarios, such as driverless cars [68], pilotless aircraft [176], and domestic assistants [174, 60]. While for many engineered systems testing, either through real deployment or via simulation, is deemed sufficient, the unique challenges of autonomous robotics, their dependence on sophisticated software control and decision-making, and their increasing deployment in safety-critical scenarios require a stronger form of verification. This leads us towards using formal methods, which are mathematically-based techniques for the specification and verification of software systems, to ensure the correctness of, and provide sufficient evidence for the certification of, robotic systems. We contribute an overview and analysis of the state of the art in formal specification and verification of autonomous robotics.
Probabilistic Model Checking of Robots Deployed in Extreme Environments
Zhao, Xingyu, Robu, Valentin, Flynn, David, Dinmohammadi, Fateme, Fisher, Michael, Webster, Matt
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at pre-mission stage and during runtime. Two novel estimators based on conservative Bayesian inference and imprecise probability model with sets of priors are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
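To give a flavour of the imprecise-probability idea, here is a minimal Python sketch. The `posterior_bounds` helper and the particular Beta prior set are our own illustrative assumptions, not the paper's conservative Bayesian estimators; they show only how a set of priors turns a point estimate of a transition parameter into an interval.

```python
def posterior_bounds(successes: int, trials: int, priors) -> tuple:
    """Bounds on the posterior mean of an unknown transition probability
    under a *set* of Beta(a, b) priors. Each prior yields the posterior
    mean (a + successes) / (a + b + trials); the set yields an interval
    rather than a single number."""
    means = [(a + successes) / (a + b + trials) for a, b in priors]
    return min(means), max(means)

# Near-vacuous prior set whose prior means span roughly 0.005 to 0.995,
# expressing little prior knowledge about the transition parameter:
priors = [(0.01, 1.99), (1.0, 1.0), (1.99, 0.01)]
lo, hi = posterior_bounds(95, 100, priors)  # e.g. 95 safe runs out of 100
```

With 95 successes in 100 operational runs, the interval [lo, hi] brackets the naive estimate 0.95; a verifier wanting a conservative safety claim would feed the lower bound, not the point estimate, into the Markov model.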
Modular Verification of Vehicle Platooning with Respect to Decisions, Space and Time
Kamali, Maryam, Linker, Sven, Fisher, Michael
The spread of autonomous systems into safety-critical areas has increased the demand for their formal verification, not only due to stronger certification requirements but also to public uncertainty over these new technologies. However, the complex nature of such systems, for example, the intricate combination of discrete and continuous aspects, ensures that whole-system verification is often infeasible. This motivates the need for novel analysis approaches that modularise the problem, allowing us to restrict our analysis to one particular aspect of the system while abstracting away from others. For instance, while verifying the real-time properties of an autonomous system we might hide the details of the internal decision-making components. In this paper we describe the verification of a range of properties across distinct dimensions of a practical hybrid agent architecture. This allows us to verify the autonomous decision-making, real-time aspects, and spatial aspects of an autonomous vehicle platooning system. This modular approach also illustrates how both algorithmic and deductive verification techniques can be applied for the analysis of different system subcomponents.
Towards Moral Autonomous Systems
Charisi, Vicky, Dennis, Louise, Fisher, Michael, Lieck, Robert, Matthias, Andreas, Slavkovik, Marija, Sombetzki, Janina, Winfield, Alan F. T., Yampolskiy, Roman
Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located at precisely the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches towards the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Then we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising out of deliberately unethical design, implementation, and use of autonomous robots.