Collaborating Authors

Robu, Valentin

Symbiotic System Design for Safe and Resilient Autonomous Robotics in Offshore Wind Farms Artificial Intelligence

To reduce Operation and Maintenance (O&M) costs on offshore wind farms, where 80% of the O&M cost relates to deploying personnel, the offshore wind sector looks to robotics and Artificial Intelligence (AI) for solutions. Barriers to Beyond Visual Line of Sight (BVLOS) robotics include operational safety compliance and resilience, inhibiting the commercialization of autonomous services offshore. To address these safety and resilience challenges, we propose a symbiotic system: one that reflects lifecycle learning and co-evolution, with knowledge shared for the mutual gain of robotic platforms and remote human operators. Our methodology enables the run-time verification of safety, reliability and resilience during autonomous missions. We synchronize digital models of the robot, environment and infrastructure, and integrate front-end analytics and bidirectional communication for autonomous adaptive mission planning and situation reporting to a remote operator. A reliability ontology for the deployed robot, based on our holistic hierarchical-relational model, supports computationally efficient analysis of platform data. We analyze the mission status and the diagnostics of critical sub-systems within the robot to provide automatic updates to our run-time reliability ontology, enabling faults to be translated into failure modes for decision making during the mission. We demonstrate an asset inspection mission within a confined space and employ millimeter-wave sensing to enhance situational awareness, detecting the presence of obscured personnel to mitigate risk. Our results demonstrate that a symbiotic system provides an enhanced resilience capability for BVLOS missions. A symbiotic system addresses the operational challenges and the reprioritization of autonomous mission objectives. This advances the technology required to achieve fully trustworthy autonomous systems.
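The fault-to-failure-mode translation described above can be sketched as a simple lookup over a reliability ontology. This is a minimal illustration only: the subsystem names, fault codes, failure modes and recovery actions below are invented for the example and are not the paper's actual ontology or hierarchical-relational model.

```python
# Illustrative sketch: a run-time reliability ontology that maps raw
# subsystem fault diagnostics to (failure mode, mission-level action).
# All names and codes are hypothetical, not taken from the paper.

FAILURE_MODE_ONTOLOGY = {
    ("battery", "low_voltage"): ("loss_of_endurance", "return_to_base"),
    ("lidar", "no_returns"): ("loss_of_perception", "hold_position"),
    ("comms", "link_timeout"): ("loss_of_oversight", "safe_abort"),
}

def translate_fault(subsystem: str, fault_code: str):
    """Translate a diagnosed fault into (failure_mode, recommended_action).

    Unknown faults default conservatively to a safe abort, since an
    unclassified failure cannot support continued autonomous operation.
    """
    return FAILURE_MODE_ONTOLOGY.get(
        (subsystem, fault_code), ("unknown_failure", "safe_abort")
    )

mode, action = translate_fault("battery", "low_voltage")
```

The conservative default for unclassified faults mirrors the safety-first stance the abstract describes: the ontology informs in-mission decision making, so any gap in it must fail safe.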

BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations Artificial Intelligence

A key impediment to the use of AI is the lack of transparency, especially in safety- and security-critical applications. The black-box nature of AI systems prevents humans from obtaining direct explanations of how the AI makes predictions, which has stimulated Explainable AI (XAI) -- a research field that aims to improve the trust and transparency of AI systems. In this paper, we introduce a novel XAI technique, BayLIME, a Bayesian modification of the widely used XAI approach LIME. BayLIME exploits prior knowledge to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. Both theoretical analysis and extensive experiments are conducted to support our conclusions.
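The core idea of such a Bayesian modification can be sketched as fitting the local linear surrogate by Bayesian linear regression, so that a prior over feature weights regularises (and thereby stabilises) the explanation across repeated runs. The sketch below assumes a Gaussian weight prior and known noise variance; the specific priors, kernels and sampling scheme of BayLIME itself differ, and all numbers here are illustrative.

```python
import numpy as np

# Sketch of a Bayesian local surrogate: the explanation weights are the
# posterior mean under a Gaussian prior N(prior_mean, I/alpha), which
# combines prior knowledge with the locally sampled data.

def bayesian_surrogate(X, y, prior_mean, alpha=1.0, noise_var=0.1):
    """Posterior mean of surrogate weights (conjugate Gaussian model)."""
    d = X.shape[1]
    A = alpha * np.eye(d) + X.T @ X / noise_var   # posterior precision
    b = alpha * prior_mean + X.T @ y / noise_var
    return np.linalg.solve(A, b)                  # posterior mean weights

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                      # local perturbation samples
true_w = np.array([1.0, -2.0, 0.5])               # "true" local behaviour
y = X @ true_w + 0.1 * rng.normal(size=50)
w = bayesian_surrogate(X, y, prior_mean=np.zeros(3))
```

With few samples the prior dominates (consistent but vague explanations); with many samples the data dominates, recovering the true local weights. That trade-off is what makes repeated explanations of one prediction agree with each other.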

Assessing Safety-Critical Systems from Operational Testing: A Study on Autonomous Vehicles Artificial Intelligence

Context: Demonstrating high reliability and safety for safety-critical systems (SCSs) remains a hard problem. Diverse evidence needs to be combined in a rigorous way: in particular, results of operational testing with other evidence from design and verification. The growing use of machine learning in SCSs, by precluding most established methods for gaining assurance, makes operational testing even more important for supporting safety and reliability claims. Objective: We use Autonomous Vehicles (AVs) as a current example to revisit the problem of demonstrating high reliability. AVs are making their debut on public roads: methods for assessing whether an AV is safe enough are urgently needed. We demonstrate how to answer 5 questions that would arise in assessing an AV type, starting with those proposed by a highly-cited study. Method: We apply new theorems extending Conservative Bayesian Inference (CBI), which exploit the rigour of Bayesian methods while reducing the risk of involuntary misuse associated with now-common applications of Bayesian inference; we define additional conditions needed for applying these methods to AVs. Results: Prior knowledge can bring substantial advantages if the AV design allows strong expectations of safety before road testing. We also show how naive attempts at conservative assessment may lead to over-optimism instead; why extrapolating the trend of disengagements is not suitable for safety claims; and how to use knowledge that an AV has moved to a less stressful environment. Conclusion: While some reliability targets will remain too high to be practically verifiable, CBI removes a major source of doubt: it allows use of prior knowledge without inducing dangerously optimistic biases. For certain ranges of required reliability and prior beliefs, CBI thus supports feasible, sound arguments. Useful conservative claims can be derived from limited prior knowledge.
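The flavour of CBI reasoning can be illustrated with a simplified worked example (not one of the paper's actual theorems). Assume a Bernoulli model of failures per mile and only one piece of prior knowledge: P(pfm <= p0) >= theta. Over all priors satisfying that constraint, the worst case puts mass theta at p0 and the remaining 1 - theta just above the claimed bound p, yielding a conservative posterior confidence after n failure-free miles. All parameter values below are illustrative.

```python
# Simplified CBI-style bound (illustrative, not the paper's theorems):
# conservative confidence that the probability of failure per mile (pfm)
# is <= p, after n failure-free miles, given only P(pfm <= p0) >= theta.

def cbi_confidence(theta, p0, p, n):
    """Posterior confidence under the worst-case two-point prior:
    mass theta at p0, mass 1 - theta just above p."""
    num = theta * (1.0 - p0) ** n
    return num / (num + (1.0 - theta) * (1.0 - p) ** n)

# E.g. 90% prior confidence that pfm <= 1e-6, and 10,000 failure-free miles:
confidence = cbi_confidence(theta=0.9, p0=1e-6, p=1e-4, n=10_000)
```

The bound shows both effects described in the abstract: strong prior expectations (large theta, small p0) raise the achievable confidence, while without them the required n grows infeasibly large.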

A Safety Framework for Critical Systems Utilising Deep Neural Networks Artificial Intelligence

Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems require rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a principled, novel safety argument framework for critical systems that utilise deep neural networks. The approach allows various forms of predictions, e.g., future reliability of passing some demands, or confidence in a required reliability level. It is supported by a Bayesian analysis using operational data and recent verification and validation techniques for deep learning. The prediction is conservative -- it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.

Consider ethical and social challenges in smart grid research Artificial Intelligence

Artificial Intelligence and Machine Learning are increasingly seen as key technologies for building more decentralised and resilient energy grids, but researchers must consider the ethical and social implications of their use. Energy grids are undergoing rapid changes, requiring new ways both to process the large amounts of data generated from the power system and - increasingly - to take smart operational decisions [1]. On the data side, the UK and most EU countries have committed to a target of offering a smart meter to every home by 2020 [2], with similar monitoring being installed in other parts of the energy network. This has led some to refer to a "data tsunami", requiring the development of new machine learning techniques to deal with the ensuing challenge of extracting useful information from this data - often in real time. Another trend is the use of AI techniques (such as those from multi-agent systems, computational game theory and decision making under uncertainty) to take autonomous allocation and control decisions. This is driven increasingly by the move towards more decentralised energy systems, where prosumers (consumers with their own micro-generation and storage) can generate and source their own electricity through peer-to-peer (P2P) trading in local energy markets and community energy schemes.

Towards Integrating Formal Verification of Autonomous Robots with Battery Prognostics and Health Management Artificial Intelligence

The battery is a key component of autonomous robots. Its performance limits the robot's safety and reliability. Unlike liquid fuel, a battery, as a chemical device, exhibits complicated features, including (i) capacity fade over successive recharges and (ii) an increasing discharge rate, for a given power demand, as the state of charge (SOC) goes down. Existing formal verification studies of autonomous robots, when considering energy constraints, formalise the energy component in a generic manner such that these battery features are overlooked. In this paper, we model an unmanned aerial vehicle (UAV) inspection mission on a wind farm and, via probabilistic model checking in PRISM, show (i) how the battery features may affect the verification results significantly in practical cases; and (ii) how the battery features, together with dynamic environments and battery safety strategies, jointly affect the verification results. Potential solutions to explicitly integrate battery prognostics and health management (PHM) with formal verification of autonomous robots are also discussed to motivate future work. Keywords: Formal verification · Probabilistic model checking · PRISM · Autonomous systems · Unmanned aerial vehicle · Battery PHM. 1 Introduction: Autonomous robots, such as unmanned aerial vehicles (UAVs, commonly termed drones), unmanned underwater vehicles (UUVs), self-driving cars and legged robots, are finding increasingly widespread application in many domains [14].
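Why feature (ii) matters for verification can be shown with a toy comparison (this is a plain Python sketch, not the paper's PRISM model, and all parameters are invented): a naive model treats the battery as a fuel tank with constant drain, while a SOC-aware model increases the effective drain as the state of charge falls. The same mission can verify as feasible under the first model and infeasible under the second.

```python
# Toy comparison (illustrative parameters, not the paper's PRISM model):
# naive constant-drain battery vs. a model whose effective discharge
# grows as the state of charge (SOC) drops.

def mission_feasible_naive(soc, drain_per_step, steps):
    """Fuel-tank model: constant drain per mission step."""
    return soc - drain_per_step * steps > 0.0

def mission_feasible_soc_aware(soc, drain_per_step, steps, k=0.5):
    """Effective drain grows as SOC falls: drain * (1 + k * (1 - soc))."""
    for _ in range(steps):
        soc -= drain_per_step * (1.0 + k * (1.0 - soc))
        if soc <= 0.0:
            return False
    return True

# An 80-step mission from a full charge, nominal drain 1.2% per step:
naive_ok = mission_feasible_naive(1.0, 0.012, 80)
aware_ok = mission_feasible_soc_aware(1.0, 0.012, 80)
```

Here the naive model predicts a 4% reserve at mission end, while the SOC-aware model runs flat around two thirds of the way through - exactly the kind of divergence in verification results the abstract reports.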

Assessing the Safety and Reliability of Autonomous Vehicles from Road Testing Artificial Intelligence

There is an urgent societal need to assess whether autonomous vehicles (AVs) are safe enough. From published quantitative safety and reliability assessments of AVs, we know that, given the goal of predicting very low rates of accidents, road testing alone requires infeasible numbers of miles to be driven. However, previous analyses do not consider any knowledge prior to road testing - knowledge which could bring substantial advantages if the AV design allows strong expectations of safety before road testing. We present the advantages of a new variant of Conservative Bayesian Inference (CBI), which uses prior knowledge while avoiding optimistic biases. We then study the trend of disengagements (take-overs by human drivers) by applying Software Reliability Growth Models (SRGMs) to data from Waymo's public road testing over 51 months, in view of the practice of software updates during this testing. Our approach is to not trust any specific SRGM, but to assess forecast accuracy and then improve forecasts. We show that, coupled with accuracy assessment and recalibration techniques, SRGMs could be a valuable test planning aid.
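A minimal sketch of the SRGM step above: fit an exponential (Goel-Okumoto-style) mean value function mu(t) = a * (1 - exp(-b*t)) to cumulative disengagement counts and read off the forecast. The data below are synthetic (not Waymo's), the fit is a crude grid search rather than proper maximum likelihood, and the paper's accuracy-assessment and recalibration steps are omitted.

```python
import numpy as np

# Sketch: least-squares grid fit of a Goel-Okumoto-style SRGM,
# mu(t) = a * (1 - exp(-b t)), to cumulative disengagement counts.
# Synthetic data; real use would fit by MLE and recalibrate forecasts.

def fit_go_model(t, cumulative):
    best, best_err = None, float("inf")
    for a in np.linspace(max(cumulative), 3.0 * max(cumulative), 60):
        for b in np.linspace(1e-3, 1.0, 200):
            err = np.sum((a * (1.0 - np.exp(-b * t)) - cumulative) ** 2)
            if err < best_err:
                best, best_err = (a, b), err
    return best

t = np.arange(1, 13, dtype=float)              # 12 synthetic months
a_true, b_true = 100.0, 0.2                    # invented ground truth
counts = a_true * (1.0 - np.exp(-b_true * t))  # noiseless synthetic counts
a_hat, b_hat = fit_go_model(t, counts)
forecast_month_18 = a_hat * (1.0 - np.exp(-b_hat * 18.0))
```

The fitted `a_hat` is the model's implied total number of disengagements ever; as the abstract cautions, no single such model should be trusted - its forecasts need accuracy assessment before use in test planning.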

Probabilistic Model Checking of Robots Deployed in Extreme Environments Artificial Intelligence

Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at the pre-mission stage and during runtime. Two novel estimators, based on conservative Bayesian inference and an imprecise probability model with sets of priors, are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
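The sets-of-priors idea can be sketched with the imprecise Dirichlet model (IDM), a standard construction in imprecise probability: with a set of Beta priors of total prior strength s, k observed "safe" transitions out of n yield an interval of posterior means for the unknown transition parameter, rather than a point estimate. This is a generic illustration of the approach, not the paper's exact estimators; the counts and s = 2 below are illustrative.

```python
# Sketch of an imprecise-probability estimator for an unknown Markov
# transition parameter, using the imprecise Dirichlet model (IDM).
# With prior strength s, k successes in n trials give an interval of
# posterior expectations; the interval narrows as data accumulate.

def idm_interval(k, n, s=2.0):
    """Lower and upper posterior expectation of the transition probability."""
    return k / (n + s), (k + s) / (n + s)

# e.g. 18 safe transitions observed out of 20 operational transitions:
lo, hi = idm_interval(k=18, n=20)
```

A model checker can then verify the requirement against the pessimistic end of the interval, which is exactly the conservative stance needed when operational data from extreme environments are scarce.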

Efficient Buyer Groups for Prediction-of-Use Electricity Tariffs

AAAI Conferences

Current electricity tariffs do not reflect the real cost that customers incur to suppliers, as units are charged at the same rate, regardless of how predictable each customer's consumption is. A recent proposal to address this problem is the prediction-of-use tariff. In such tariffs, a customer is asked in advance to predict her future consumption, and is charged based both on her actual consumption and on the deviation from her prediction. Prior work [aamas2014] studied the cost game induced by a single such tariff, and showed that customers would have an incentive to minimize their risk by joining together as a grand coalition when buying electricity. In this work we study the efficient (i.e. cost-minimizing) structure of buying groups for the more realistic setting where multiple, competing prediction-of-use tariffs are available. We propose a polynomial-time algorithm to compute efficient buyer groups, and validate our approach experimentally, using a large-scale data set of domestic electricity consumers in the UK.
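Why grouping reduces cost under such a tariff can be shown in a few lines: independent prediction errors partially cancel when consumption is pooled, so the coalition's deviation penalty is smaller than the sum of individual penalties. The tariff structure below (a base rate plus a penalty proportional to absolute deviation) is a simplified illustration, and the rates and consumption figures are invented.

```python
import numpy as np

# Illustrative prediction-of-use tariff: base rate per unit consumed,
# plus a penalty per unit of deviation from the advance prediction.
# Rates and the consumption model are invented for this sketch.

def pou_cost(actual, predicted, base=0.15, penalty=0.10):
    return base * actual + penalty * abs(actual - predicted)

rng = np.random.default_rng(1)
n_customers, pred = 100, 10.0
# Each customer predicts 10 units; actuals deviate with independent noise.
actuals = pred + rng.normal(0.0, 2.0, size=n_customers)

solo = sum(pou_cost(a, pred) for a in actuals)       # everyone buys alone
group = pou_cost(actuals.sum(), pred * n_customers)  # grand coalition
```

The base charges are identical in both cases; only the penalty differs, because |sum of errors| <= sum of |errors|, with the gap growing as errors of opposite sign cancel. Which customers should pool together, once multiple competing tariffs exist, is the structure question the paper's algorithm answers.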

Cooperative Virtual Power Plant Formation Using Scoring Rules

AAAI Conferences

Virtual Power Plants (VPPs) are fast emerging as a suitable means of integrating small and distributed energy resources (DERs), like wind and solar, into the electricity supply network (Grid). VPPs are formed via the aggregation of a large number of such DERs, so that they exhibit the characteristics of a traditional generator in terms of predictability and robustness. In this work, we promote the formation of such "cooperative" VPPs (CVPPs) using multi-agent technology. In particular, we design a payment mechanism that encourages DERs to join CVPPs with large overall production. Our method is based on strictly proper scoring rules and incentivises the provision of accurate predictions from the CVPPs -- and in turn, the member DERs -- which aids in the planning of the supply schedule at the Grid. We empirically evaluate our approach using the real-world setting of 16 commercial wind farms in the UK. We show that our mechanism incentivises real DERs to form CVPPs, and outperforms the current state-of-the-art payment mechanism developed for this problem.
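The key property of a strictly proper scoring rule can be demonstrated in a few lines: under such a rule, a forecaster's expected payment is uniquely maximised by reporting her true belief. The quadratic (Brier-style) rule for a binary outcome below is a textbook example used for illustration; it is not the specific mechanism the paper designs, and the probabilities are invented.

```python
# Illustration of strict propriety with the quadratic (Brier-style)
# scoring rule for a binary outcome. Reporting the true belief maximises
# expected payment; any shaded report strictly lowers it.

def quadratic_score(report, outcome):
    """Payment for reporting probability `report` of a binary outcome."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, true_p):
    """Expected payment when the true success probability is true_p."""
    return (true_p * quadratic_score(report, 1)
            + (1.0 - true_p) * quadratic_score(report, 0))

true_p = 0.7                              # forecaster's genuine belief
truthful = expected_score(0.7, true_p)    # report the true belief
shaded = expected_score(0.5, true_p)      # hedge towards 50/50
```

Because truth-telling is the unique optimum, a CVPP paid this way has no incentive to inflate or hedge its production forecast, which is what makes the aggregate predictable enough for Grid supply scheduling.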