Falsification-Driven Reinforcement Learning for Maritime Motion Planning

Müller, Marlon, Finkeldei, Florian, Krasowski, Hanna, Arcak, Murat, Althoff, Matthias

arXiv.org Artificial Intelligence

Compliance with maritime traffic rules is essential for the safe operation of autonomous vessels, yet training reinforcement learning (RL) agents to adhere to them is challenging. The behavior of RL agents is shaped by the training scenarios they encounter, but creating scenarios that capture the complexity of maritime navigation is non-trivial, and real-world data alone is insufficient. To address this, we propose a falsification-driven RL approach that generates adversarial training scenarios in which the vessel under test violates maritime traffic rules, which are expressed as signal temporal logic specifications. Our experiments on open-sea navigation with two vessels demonstrate that the proposed approach provides more relevant training scenarios and achieves more consistent rule compliance.
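The core loop behind such an approach can be illustrated with a small sketch. The snippet below is not the authors' implementation: the constant-velocity vessel dynamics, the fixed-course ego "policy", the single rule G(distance >= D_MIN), and the random-search falsifier are all placeholder assumptions, used only to show how a scenario's rule robustness can be minimized to surface adversarial training scenarios.

```python
# Minimal, illustrative sketch of falsification-driven scenario generation.
# NOT the authors' implementation: the ego "policy", the vessel dynamics,
# and the single collision-avoidance rule are placeholder assumptions.
import numpy as np

D_MIN = 50.0  # assumed minimum safe distance (metres)


def simulate(encounter, steps=200, dt=1.0):
    """Roll out ego and target vessels with constant-velocity placeholder dynamics."""
    ego = np.array([0.0, 0.0])
    ego_v = np.array([5.0, 0.0])  # assumed ego policy: hold a steady course
    tgt = np.array(encounter["target_pos"], dtype=float)
    tgt_v = encounter["target_speed"] * np.array(
        [np.cos(encounter["target_heading"]), np.sin(encounter["target_heading"])]
    )
    dists = []
    for _ in range(steps):
        ego = ego + ego_v * dt
        tgt = tgt + tgt_v * dt
        dists.append(np.linalg.norm(ego - tgt))
    return np.array(dists)


def robustness(dists):
    """Robustness of the STL rule G (distance >= D_MIN): min over time of d(t) - D_MIN."""
    return float(np.min(dists - D_MIN))


def falsify(trials=500, seed=0):
    """Random-search falsifier: look for an encounter whose robustness is negative."""
    rng = np.random.default_rng(seed)
    worst = (np.inf, None)
    for _ in range(trials):
        enc = {
            "target_pos": rng.uniform([200, -200], [800, 200]),
            "target_heading": rng.uniform(0, 2 * np.pi),
            "target_speed": rng.uniform(2.0, 8.0),
        }
        rho = robustness(simulate(enc))
        if rho < worst[0]:
            worst = (rho, enc)
    return worst


if __name__ == "__main__":
    rho, enc = falsify()
    print(f"worst robustness {rho:.1f} m; rule violated: {rho < 0}")
```

Encounters with negative robustness would then be fed back as training scenarios; in the paper the rules are richer STL specifications and the falsifier is more sophisticated than this random search.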


CBFKIT: A Control Barrier Function Toolbox for Robotics Applications

Black, Mitchell, Fainekos, Georgios, Hoxha, Bardh, Okamoto, Hideki, Prokhorov, Danil

arXiv.org Artificial Intelligence

This paper introduces CBFKit, a Python/ROS toolbox for safe robotics planning and control under uncertainty. The toolbox provides a general framework for designing control barrier functions for mobility systems in both deterministic and stochastic environments. It can be connected to the ROS open-source robotics middleware, allowing for the setup of multi-robot applications, the encoding of environments and maps, and integration with predictive motion planning algorithms. Additionally, it offers multiple CBF variations and algorithms for robot control. CBFKit is demonstrated on the Toyota Human Support Robot (HSR) in both simulation and physical experiments.
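As a rough illustration of the control barrier function idea such a toolbox packages, the sketch below filters a nominal control for a single-integrator robot so that a barrier condition around one circular obstacle holds. This is a generic textbook-style construction, not the CBFKit API; the dynamics, the barrier h(x), and the gain ALPHA are assumptions chosen to keep the example small.

```python
# Illustrative CBF safety filter for a single-integrator robot avoiding one
# circular obstacle. Generic construction, not the CBFKit API.
import numpy as np

OBSTACLE = np.array([2.0, 0.0])  # assumed obstacle centre
RADIUS = 0.5                     # assumed obstacle radius
ALPHA = 1.0                      # assumed class-K gain


def cbf_filter(x, u_nom):
    """Minimally modify u_nom so that dh/dt >= -ALPHA * h(x) holds.

    h(x) = ||x - obstacle||^2 - RADIUS^2 and x_dot = u, so
    dh/dt = 2 (x - obstacle) . u. With a single linear constraint, the
    closest feasible control is a closed-form projection onto a half-space.
    """
    h = np.dot(x - OBSTACLE, x - OBSTACLE) - RADIUS**2
    grad = 2.0 * (x - OBSTACLE)
    slack = grad @ u_nom + ALPHA * h
    if slack >= 0.0:  # nominal control already satisfies the CBF condition
        return u_nom
    return u_nom - slack * grad / (grad @ grad)


if __name__ == "__main__":
    x = np.array([1.2, 0.1])      # robot position near the obstacle
    u_nom = np.array([1.0, 0.0])  # nominal control pointing at the obstacle
    print("filtered control:", cbf_filter(x, u_nom))
```

In practice the single-constraint projection is replaced by a quadratic program over all barrier and input constraints, which is the kind of machinery a toolbox like CBFKit automates.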


A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems

Corso, Anthony | Moss, Robert (Stanford University) | Koren, Mark (Stanford University) | Lee, Ritchie (NASA Ames) | Kochenderfer, Mykel (Stanford University)

Journal of Artificial Intelligence Research

Autonomous cyber-physical systems (CPS) can improve safety and efficiency for safety-critical applications, but require rigorous testing before deployment. The complexity of these systems often precludes the use of formal verification, and real-world testing can be too dangerous during development. Therefore, simulation-based techniques have been developed that treat the system under test as a black box operating in a simulated environment. Safety validation tasks include finding disturbances in the environment that cause the system to fail (falsification), finding the most-likely failure, and estimating the probability that the system fails. Motivated by the prevalence of safety-critical artificial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS with a focus on applied algorithms and their modifications for the safety validation problem. We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling. Problem decomposition techniques are presented to help scale algorithms to large state spaces, which are common for CPS. A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems. Finally, we present a survey of existing academic and commercially available safety validation tools.
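A minimal sketch of the falsification task framed as optimization is given below. The braking-controller "system under test" and the hill-climbing search over disturbance parameters are made-up placeholders standing in for a black-box simulator, not algorithms or benchmarks taken from the survey.

```python
# Hedged sketch of black-box falsification as optimization: a local search
# over disturbance parameters minimizes the safety margin returned by a
# stand-in simulator.
import numpy as np


def safety_margin(disturbance):
    """Black-box stand-in: braking controller vs. a pedestrian disturbance.

    Returns the minimum gap (metres); a negative value means a collision,
    i.e. a falsifying disturbance has been found.
    """
    ped_offset, ped_speed = disturbance
    car, v = 0.0, 15.0
    ped = 40.0 + ped_offset
    gap = np.inf
    for _ in range(100):          # 0.1 s steps
        if ped - car < 30.0:      # crude perception-triggered braking
            v = max(0.0, v - 0.4)
        car += 0.1 * v
        ped -= 0.1 * ped_speed
        gap = min(gap, ped - car)
    return gap


def falsify(iters=300, seed=1):
    """Hill-climbing falsifier: perturb the current disturbance, keep improvements."""
    rng = np.random.default_rng(seed)
    x = np.array([0.0, 0.0])
    best = safety_margin(x)
    for _ in range(iters):
        cand = np.clip(x + rng.normal(0, 1.0, size=2), [-10, 0], [10, 3])
        m = safety_margin(cand)
        if m < best:
            x, best = cand, m
    return best, x


if __name__ == "__main__":
    margin, dist = falsify()
    print(f"min gap {margin:.2f} m at disturbance {dist}; failure found: {margin < 0}")
```

The survey's other tasks fit the same template: most-likely failure weights the search by disturbance likelihood, and failure-probability estimation replaces the minimizer with importance sampling over the same black-box margin.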


Simulation-based Adversarial Test Generation for Autonomous Vehicles with Machine Learning Components

Tuncali, Cumhur Erkan, Fainekos, Georgios, Ito, Hisahiro, Kapinski, James

arXiv.org Artificial Intelligence

Many organizations are developing autonomous driving systems, which are expected to be deployed at a large scale in the near future. Despite this, there is a lack of agreement on appropriate methods to test, debug, and certify the performance of these systems. One of the main challenges is that many autonomous driving systems have machine learning components, such as deep neural networks, for which formal properties are difficult to characterize. We present a testing framework that is compatible with test case generation and automatic falsification methods, which are used to evaluate cyber-physical systems. We demonstrate how the framework can be used to evaluate closed-loop properties of an autonomous driving system model that includes the ML components, all within a virtual environment. We also show how to use test case generation methods, such as covering arrays, as well as requirement falsification methods to automatically identify problematic test scenarios. The resulting framework can be used to increase the reliability of autonomous driving systems.
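One of the combinatorial techniques mentioned, covering arrays, can be sketched with a simple greedy strength-2 (pairwise) construction. The scenario parameters and the greedy routine below are illustrative assumptions, not the tooling used in the paper; they only show how a small test suite can cover every pair of parameter values.

```python
# Hedged sketch of pairwise (strength-2) covering test generation over
# made-up discretized scenario parameters.
from itertools import combinations, product

# Hypothetical discretized scenario parameters for an autonomous-driving test.
PARAMS = {
    "ego_speed":     [10, 20, 30],                 # m/s
    "lead_behavior": ["cruise", "brake", "cut_in"],
    "weather":       ["clear", "rain", "fog"],
    "pedestrian":    ["none", "crossing"],
}


def pairwise_tests(params):
    """Greedily pick full assignments until every value pair of every
    parameter pair appears in at least one test."""
    names = list(params)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    tests = []
    while uncovered:
        def gain(t):
            return sum(((a, t[a]), (b, t[b])) in uncovered
                       for a, b in combinations(names, 2))
        best = max(candidates, key=gain)
        tests.append(best)
        uncovered -= {((a, best[a]), (b, best[b])) for a, b in combinations(names, 2)}
    return tests


if __name__ == "__main__":
    suite = pairwise_tests(PARAMS)
    print(f"{len(suite)} tests cover all parameter-value pairs")
    for t in suite:
        print(t)
```

In a framework like the one described, each generated combination would parameterize a simulated scenario, and the falsifier would then search within that scenario for concrete requirement violations.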


Planning in Dynamic Environments Through Temporal Logic Monitoring

Hoxha, Bardh (Arizona State University) | Fainekos, Georgios (Arizona State University)

AAAI Conferences

We present a framework that enables online planning for robotic systems in dynamic environments. The PLANrm framework presented in this work utilizes the theory of robustness and monitoring of Metric Temporal Logic (MTL) specifications to inspect and modify available plans to both avoid obstacles and satisfy specifications in a dynamic environment. The use of MTL allows the practitioner to set complex event- and timing-based specifications that must be satisfied during execution of the plan. The monitoring algorithm inspects the possible paths in a bounded window and selects and adjusts a path to satisfy the specifications. In this paper, we present initial results on the framework and an extended summary of the algorithmic results. The approach is illustrated using a running example of a car-like model with a number of MTL specifications.
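The path-selection step can be sketched as follows. This is not the PLANrm implementation: the two placeholder requirements (always keep clearance from an obstacle, eventually reach the goal) and the discrete-time min/max robustness computation are assumptions standing in for full MTL monitoring.

```python
# Illustrative sketch (not PLANrm) of selecting among candidate paths in a
# bounded window via discrete-time robustness of two MTL-style requirements:
#   G_[0,T] (distance to obstacle >= D_SAFE)   -- always stay clear
#   F_[0,T] (distance to goal <= D_GOAL)       -- eventually reach the goal
import numpy as np

OBST = np.array([1.0, 1.0]); D_SAFE = 0.4  # assumed obstacle and clearance
GOAL = np.array([2.0, 2.0]); D_GOAL = 0.2  # assumed goal region


def robustness(path):
    """Discrete-time robustness: min over time for 'always', max for
    'eventually', and min of the two to conjoin the requirements."""
    d_obst = np.linalg.norm(path - OBST, axis=1)
    d_goal = np.linalg.norm(path - GOAL, axis=1)
    rho_always = np.min(d_obst - D_SAFE)
    rho_eventually = np.max(D_GOAL - d_goal)
    return min(rho_always, rho_eventually)


def pick_path(candidates):
    """Monitor every candidate path in the window and keep the most robust one."""
    return max(candidates, key=robustness)


if __name__ == "__main__":
    t = np.linspace(0, 1, 20)[:, None]
    straight = t * GOAL                            # cuts close to the obstacle
    detour = np.hstack([t * 2.0, (t ** 3) * 2.0])  # bows away from it first
    best = pick_path([straight, detour])
    print("chosen path robustness:", robustness(best))
```

Positive robustness indicates the specification is satisfied with margin, which is what allows the monitor to not only accept or reject a path but also to adjust it toward more robust alternatives as the environment changes.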