
Collaborating Authors

 O'Keeffe, James


Detecting and Diagnosing Faults in Autonomous Robot Swarms with an Artificial Antibody Population Model

arXiv.org Artificial Intelligence

An active approach to fault tolerance is essential for long-term autonomy in robots -- particularly multi-robot systems and swarms. Previous efforts have primarily focused on spontaneously occurring electro-mechanical failures in the sensors and actuators of a minority sub-population of robots. While the systems that enable this function are valuable, they have not yet considered that many failures arise from gradual wear and tear with continued operation, and that this may be more challenging to detect than sudden step changes in performance. This paper presents the Artificial Antibody Population Dynamics (AAPD) model -- an immune-inspired model for the detection and diagnosis of gradual degradation in robot swarms. The AAPD model is demonstrated to reliably detect and diagnose gradual degradation, as well as spontaneous changes in performance, in swarms of as few as 5 robots while remaining tolerant of normally behaving robots. The AAPD model is distributed, offers supervised and unsupervised configurations, and demonstrates promising scalability. Deploying the AAPD model on a swarm of foraging robots undergoing slow degradation enables the swarm to operate at an average of ~79% of its performance in perfect conditions.
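The core signal the AAPD model exploits -- gradual performance drift relative to a learned baseline -- can be illustrated with a minimal sketch. This toy detector is illustrative only and is not the authors' immune-inspired model; the function name, window size, and tolerance are invented:

```python
import random

def detect_degradation(history, window=10, tolerance=0.15):
    """Flag a robot whose recent performance drifts below its early baseline.

    `history` is a list of per-timestep performance scores (e.g. items
    foraged per minute). Compare the mean of the most recent `window`
    readings against the mean of the first `window` readings; flag the
    robot if performance has dropped by more than `tolerance`.
    """
    if len(history) < 2 * window:
        return False  # not enough data to form a baseline
    baseline = sum(history[:window]) / window
    recent = sum(history[-window:]) / window
    return recent < baseline * (1 - tolerance)

random.seed(0)
# A healthy robot: stable performance with small noise around 1.0
healthy = [1.0 + random.uniform(-0.02, 0.02) for _ in range(40)]
# A gradually degrading robot: performance decays by 1% per step
degrading = [0.99 ** t for t in range(40)]

print(detect_degradation(healthy))    # False
print(detect_degradation(degrading))  # True
```

In an immune-inspired scheme, a positive detection would stimulate the antibody sub-population matching that robot's behaviour signature; here a single fixed threshold stands in for that dynamic.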


Predictive Fault Tolerance for Autonomous Robot Swarms

arXiv.org Artificial Intelligence

Active fault tolerance is essential for robot swarms to retain long-term autonomy. Previous work on swarm fault tolerance focuses on reacting to electro-mechanical faults that are spontaneously injected into robot sensors and actuators. Resolving faults once they have manifested as failures is an inefficient approach, and there are some safety-critical scenarios in which any kind of robot failure is unacceptable. We propose a predictive approach to fault tolerance, based on the principle of preemptive maintenance, in which potential faults are autonomously detected and resolved before they manifest as failures. Our approach is shown to improve swarm performance and prevent robot failure in the cases tested.
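The preemptive-maintenance principle described above can be sketched as trend extrapolation: fit a line to a robot's recent performance readings and estimate when it will cross a failure threshold. This is a hypothetical illustration, not the paper's method, and the threshold value is invented:

```python
def predict_failure_step(readings, failure_level=0.5):
    """Estimate the timestep at which a degrading signal crosses a threshold.

    Fit a least-squares line to the performance readings and extrapolate
    to the step at which performance would drop below `failure_level`.
    Returns None if there is no downward trend.
    """
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope >= 0:
        return None  # no degradation trend: no predicted failure
    intercept = mean_y - slope * mean_x
    return (failure_level - intercept) / slope

# Performance dropping by 2% of full scale per step, starting at 1.0
readings = [1.0 - 0.02 * t for t in range(10)]
step = predict_failure_step(readings)
print(round(step, 1))  # the line y = 1.0 - 0.02x crosses 0.5 at x = 25
```

A swarm controller built on this idea could schedule a robot for maintenance once the predicted failure step falls within some safety horizon, resolving the fault before it ever manifests as a failure.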


Practical Mission Planning for Optimized UAV-Sensor Wireless Recharging

arXiv.org Artificial Intelligence

Optimal maintenance of sensor nodes in a Wireless Rechargeable Sensor Network (WRSN) requires effective scheduling of power delivery vehicles by solving the Charging Scheduling Problem (CSP). Deploying Unmanned Aerial Vehicles (UAVs) as mobile chargers has emerged as a promising solution due to their mobility and flexibility. The CSP can be formulated as a Mixed-Integer Non-Linear Programming problem whose optimization objective is to maximize the recharged energy of sensor nodes within the UAV battery constraint. While many studies have demonstrated satisfactory performance of heuristic algorithms on specific routing problems, few explore online updating (i.e., mission re-planning `on the fly') in the CSP context. Here we present a new offline and online mission planner that leverages a first-principles power consumption model and uses real-time state and environmental information. The planner, the Rapid Online Metaheuristic-based Planner (ROMP), supplements solutions from Guided Local Search (GLS) with our Context-aware Black Hole Algorithm. Our results demonstrate that ROMP outperforms GLS in most of the cases tested. We also developed FastROMP, which speeds up online mission re-planning by introducing a new online adjustment operator that uses the latest state information as input, eliminating the need for re-initialization. FastROMP not only provides a better-quality route, but also significantly reduces computation time: the reduction ranges from 39.57% in sparse deployments to 93.3% in denser deployments.
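For intuition, the offline CSP admits a simple greedy baseline: serve the nearest unserved node while enough battery remains to travel there, recharge it, and return to the depot. This sketch is a hypothetical baseline, not ROMP or GLS, and its cost model (one unit of energy per unit of distance) is invented:

```python
import math

def greedy_charging_route(nodes, battery):
    """Greedy baseline for the Charging Scheduling Problem (CSP).

    Repeatedly fly to the nearest unserved sensor node while the UAV
    still has enough battery to travel there, deliver the requested
    energy, and return to the depot at (0, 0). `nodes` maps a node id
    to ((x, y), energy_needed). Returns the route and total energy
    delivered.
    """
    depot = (0.0, 0.0)
    pos, route, delivered = depot, [], 0.0
    unserved = dict(nodes)
    while unserved:
        # pick the closest remaining node
        nid = min(unserved, key=lambda k: math.dist(pos, unserved[k][0]))
        xy, need = unserved[nid]
        travel = math.dist(pos, xy)
        home = math.dist(xy, depot)
        if travel + need + home > battery:
            break  # cannot serve this node and still make it back
        battery -= travel + need
        delivered += need
        pos = xy
        route.append(nid)
        del unserved[nid]
    return route, delivered

nodes = {"A": ((3.0, 4.0), 10.0), "B": ((6.0, 8.0), 10.0), "C": ((30.0, 40.0), 5.0)}
route, energy = greedy_charging_route(nodes, battery=50.0)
print(route, energy)  # A and B are served; C is out of reach on this budget
```

A metaheuristic planner improves on this in two obvious ways: it can reorder visits globally rather than myopically, and -- the focus of FastROMP -- it can adjust the route online as fresh state information arrives instead of re-initializing from scratch.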


QuaDUE-CCM: Interpretable Distributional Reinforcement Learning using Uncertain Contraction Metrics for Precise Quadrotor Trajectory Tracking

arXiv.org Artificial Intelligence

Accuracy and stability are common requirements for quadrotor trajectory tracking systems. Designing an accurate and stable tracking controller remains challenging, particularly in unknown and dynamic environments with complex aerodynamic disturbances. We propose a Quantile-approximation-based Distributional-reinforced Uncertainty Estimator (QuaDUE) to accurately identify the effects of aerodynamic disturbances, i.e., the uncertainties between the true and estimated Control Contraction Metrics (CCMs). Taking inspiration from contraction theory and integrating QuaDUE for uncertainties, our novel CCM-based trajectory tracking framework tracks any feasible reference trajectory precisely whilst guaranteeing exponential convergence. More importantly, the convergence and training acceleration of the distributional RL are, respectively, guaranteed and analyzed from theoretical perspectives. We also demonstrate our system under unknown and diverse aerodynamic forces. Under large aerodynamic forces (> 2 m/s^2), compared with the classic data-driven approach, QuaDUE-CCM achieves at least a 56.6% improvement in tracking error. Compared with QuaDRED-MPC, a distributional RL-based approach, QuaDUE-CCM achieves at least a threefold improvement in contraction rate.
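The quantile-approximation idea underlying QuaDUE can be illustrated in isolation: a set of quantile estimates is trained with the pinball-loss (quantile regression) subgradient, so each estimate converges toward a fixed quantile of the disturbance distribution. This standalone sketch omits the RL and CCM machinery entirely; the learning rate, epoch count, and sample distribution are invented:

```python
import random

def fit_quantiles(samples, taus, lr=0.01, epochs=300):
    """Estimate fixed quantiles of a scalar disturbance distribution.

    Each estimate thetas[i] is updated with the pinball-loss
    subgradient for quantile level taus[i], so it drifts toward the
    taus[i]-quantile of the samples.
    """
    thetas = [0.0] * len(taus)
    for _ in range(epochs):
        for z in samples:
            for i, tau in enumerate(taus):
                # pinball subgradient: tau if z above the estimate, tau - 1 otherwise
                grad = tau if z > thetas[i] else tau - 1.0
                thetas[i] += lr * grad
    return thetas

random.seed(1)
# disturbance samples drawn uniformly from [0, 1]
samples = [random.random() for _ in range(500)]
q25, q50, q75 = fit_quantiles(samples, [0.25, 0.5, 0.75])
print(q25, q50, q75)  # each estimate settles near the corresponding quantile
```

In a distributional estimator, a bank of such quantile values approximates the whole disturbance distribution rather than only its mean, which is what lets the controller reason about uncertainty instead of a single point estimate.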


Interpretable Stochastic Model Predictive Control using Distributional Reinforced Estimation for Quadrotor Tracking Systems

arXiv.org Artificial Intelligence

This paper presents a novel trajectory tracker for autonomous quadrotor navigation in dynamic and complex environments. The proposed framework integrates a distributional Reinforcement Learning (RL) estimator for unknown aerodynamic effects into a Stochastic Model Predictive Controller (SMPC) for trajectory tracking. Aerodynamic effects derived from drag forces and moment variations are difficult to model directly and accurately, so most current quadrotor tracking systems treat them as simple `disturbances' in conventional control approaches. We propose the Quantile-approximation-based Distributional Reinforced-disturbance-estimator, an aerodynamic disturbance estimator, to accurately identify disturbances, i.e., uncertainties between the true and estimated values of aerodynamic effects. Simplified Affine Disturbance Feedback is employed for control parameterization to guarantee convexity, and is then integrated with the SMPC to achieve sufficient and non-conservative control signals. We demonstrate that our system reduces cumulative tracking error by at least 66% under unknown and diverse aerodynamic forces compared with recent state-of-the-art approaches. To address traditional Reinforcement Learning's non-interpretability, we provide convergence and stability guarantees for the Distributional RL estimator and the SMPC, respectively, under non-zero-mean disturbances.
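The role of affine disturbance feedback can be seen in a toy rollout: the control at each step is an affine function of the disturbances observed so far, u_t = v_t + sum over k < t of M[t][k] * w_k, so the decision variables (v, M) enter the dynamics linearly, which is what keeps the SMPC optimization convex. The scalar system and hand-picked gains below are purely illustrative, not the paper's formulation:

```python
def adf_rollout(v, M, w, a=1.0, b=1.0, x0=0.0):
    """Simulate x_{t+1} = a*x_t + b*u_t + w_t under affine disturbance feedback.

    u_t = v[t] + sum_{k < t} M[t][k] * w[k]: the control is affine in
    the disturbances already observed. M must be strictly lower
    triangular so each control only uses past disturbances (causality).
    Returns the state trajectory including the initial state.
    """
    x, xs = x0, [x0]
    for t in range(len(v)):
        u = v[t] + sum(M[t][k] * w[k] for k in range(t))
        x = a * x + b * u + w[t]
        xs.append(x)
    return xs

# Horizon 3, with gains chosen so each control cancels the previous disturbance.
v = [0.0, 0.0, 0.0]
M = [[0.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0],   # u_1 cancels w_0
     [0.0, -1.0, 0.0]]   # u_2 cancels w_1
w = [0.5, -0.3, 0.2]
print(adf_rollout(v, M, w))  # [0.0, 0.5, -0.3, 0.2]: each step undoes the prior disturbance
```

In an actual SMPC, v and M are the optimization variables, chosen subject to chance constraints; fixing them by hand here simply exhibits the parameterization that makes that optimization convex.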