Trusting Learning Based Adaptive Flight Control Algorithms

AAAI Conferences

Autonomous unmanned aerial systems (UAS) are envisioned to become increasingly utilized in commercial airspace. In order to be attractive for commercial applications, UAS are required to undergo a quick development cycle, ensure cost effectiveness, and work reliably in changing environments. Learning-based adaptive control systems have been proposed to meet these demands, as they promise more flexibility than traditional linear control techniques. However, no consistent verification and validation (V&V) framework exists for adaptive controllers. The underlying purpose of the V&V processes in certifying control algorithms for aircraft is to build trust in a safety-critical system. In the past, most adaptive control algorithms were designed solely to ensure stability of a model of the system and to meet robustness requirements against selected uncertainties and disturbances. However, these assessments do not guarantee the reliable performance of the real system that the V&V process requires. The question arises of how trust can be defined for learning-based adaptive control algorithms. From our perspective, the self-confidence of an adaptive flight controller will be an integral part of building trust in the system. The notion of self-confidence in the adaptive control context refers to the controller's estimate of its own capability to operate reliably, and to its ability to foresee the need for action before undesired behaviors lead to a loss of the system. In this paper we present a pathway to a possible answer to the question of how self-confidence for adaptive controllers can be achieved. In particular, we elaborate on how algorithms for diagnosis and prognosis can be integrated to support this process.
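As a concrete illustration of the idea, the sketch below shows one hypothetical way a diagnosis signal (the current tracking residual) and a prognosis signal (the residual's extrapolated trend) could be fused into a scalar self-confidence score. The function name, limits, and fusion rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def self_confidence(residuals, horizon=10, resid_limit=0.5):
    """Map recent tracking residuals to a confidence score in [0, 1]."""
    r = np.asarray(residuals, dtype=float)
    # Diagnosis: how far the current residual is from its allowed limit.
    diag = np.clip(1.0 - r[-1] / resid_limit, 0.0, 1.0)
    # Prognosis: extrapolate a linear trend 'horizon' steps ahead and
    # score the predicted residual against the same limit.
    t = np.arange(len(r))
    slope, intercept = np.polyfit(t, r, 1)
    predicted = intercept + slope * (len(r) - 1 + horizon)
    prog = np.clip(1.0 - predicted / resid_limit, 0.0, 1.0)
    # Conservative fusion: confidence is limited by the weaker signal.
    return min(diag, prog)

# Residuals drifting upward drop the score before the limit is ever hit,
# i.e. the prognosis term flags the problem ahead of the diagnosis term.
print(self_confidence([0.05, 0.08, 0.12, 0.18, 0.25]))
```

The conservative min-fusion reflects the abstract's emphasis on foreseeing the need for action: a controller should not report high confidence merely because the present residual is small.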


Verification for Machine Learning, Autonomy, and Neural Networks Survey

arXiv.org Artificial Intelligence

This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and their subcomponents. Autonomy in CPS is enabled by recent advances in artificial intelligence (AI) and machine learning (ML), such as deep neural networks (DNNs), embedded in so-called learning-enabled components (LECs) that accomplish tasks from classification to control. The formal methods and formal verification community has recently developed methods to characterize the behaviors of these LECs, with the eventual goal of formally verifying specifications for them, and this article presents a survey of many of these approaches.
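One representative technique from this literature is interval bound propagation (IBP), which soundly over-approximates the set of outputs a ReLU network can produce for a box of inputs. The sketch below is a minimal numpy illustration; the weights and the input box are made up for the example.

```python
import numpy as np

def propagate_interval(layers, lower, upper):
    """Push an input box [lower, upper] through affine + ReLU layers."""
    for i, (W, b) in enumerate(layers):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        # |W| @ radius bounds the worst-case growth of the box.
        lower = W @ center + b - np.abs(W) @ radius
        upper = W @ center + b + np.abs(W) @ radius
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.array([-0.5]))]
lo, hi = propagate_interval(layers, np.array([0.0, 0.0]), np.array([0.1, 0.1]))
# A sound (possibly loose) over-approximation of the reachable outputs:
print(f"output bounds: [{lo}, {hi}]")
```

If a safety specification holds for the entire output box, it provably holds for every input in the box; the price is conservatism, which more refined methods in the survey aim to reduce.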


A Bayesian Neural Network based on Dropout Regulation

arXiv.org Artificial Intelligence

Bayesian Neural Networks (BNN) have recently emerged in the Deep Learning world for dealing with uncertainty estimation in classification tasks, and are used in many application domains such as astrophysics, autonomous driving... BNN assume a prior over the weights of a neural network instead of point estimates, thereby enabling the estimation of both the aleatoric and epistemic uncertainty of the model prediction. Moreover, a particular type of BNN, namely MC Dropout, assumes a Bernoulli distribution on the weights by using Dropout. Several attempts to optimize the dropout rate exist, e.g., using a variational approach. In this paper, we present a new method called "Dropout Regulation" (DR), which consists of automatically adjusting the dropout rate during training using a controller as employed in automation. DR allows for a precise estimation of the uncertainty, comparable to the state-of-the-art, while remaining simple to implement.
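A minimal sketch of the two ingredients the abstract combines: Monte-Carlo Dropout uncertainty estimation (dropout kept active at inference) and a simple proportional controller that nudges the dropout rate toward a target uncertainty level. The network architecture, gain, and target below are illustrative assumptions; the paper's actual DR controller may differ in form.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.dropout = nn.Dropout(p)
        self.fc1, self.fc2 = nn.Linear(10, 64), nn.Linear(64, 3)

    def forward(self, x):
        return self.fc2(self.dropout(torch.relu(self.fc1(x))))

@torch.no_grad()
def mc_predict(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack([torch.softmax(model(x), dim=-1)
                         for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)  # predictive mean, epistemic spread

model = MCDropoutNet(p=0.5)
x = torch.randn(4, 10)
mean, var = mc_predict(model, x)

# Proportional "regulation": raise p if uncertainty is below the target,
# lower it otherwise (gain, target, and clamp range are assumptions).
target, gain = 0.02, 0.5
err = target - var.mean().item()
model.dropout.p = float(min(max(model.dropout.p + gain * err, 0.05), 0.95))
print(f"mean uncertainty={var.mean().item():.4f}, "
      f"adjusted dropout p={model.dropout.p:.3f}")
```

In training, this adjustment step would run once per epoch on a validation batch, closing the loop between measured uncertainty and the dropout rate.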


Stochastic processes and feedback-linearisation for online identification and Bayesian adaptive control of fully-actuated mechanical systems

arXiv.org Machine Learning

This work proposes a new method for simultaneous probabilistic identification and control of an observable, fully-actuated mechanical system. Identification is achieved by conditioning stochastic process priors on observations of configurations and noisy estimates of configuration derivatives. In contrast to previous work that has used stochastic processes for identification, we leverage the structural knowledge afforded by Lagrangian mechanics and learn the drift and control input matrix functions of the control-affine system separately. We utilise feedback-linearisation to reduce, in expectation, the uncertain nonlinear control problem to one that is easy to regulate in a desired manner. Thereby, our method combines the flexibility of nonparametric Bayesian learning with epistemological guarantees on the expected closed-loop trajectory. We illustrate our method in the context of torque-actuated pendula where the dynamics are learned with a combination of normal and log-normal processes.
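The following sketch conveys the idea on a torque-actuated pendulum under simplifying assumptions: only the drift term is learned (here with an off-the-shelf GP rather than the paper's normal and log-normal processes), and the control gain is taken as known, so the feedback-linearising law can cancel the drift in expectation. All constants are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

g0, length, damping, g_in = 9.81, 1.0, 0.2, 1.0  # "true" (unknown) dynamics

def true_drift(q, qd):
    return -(g0 / length) * np.sin(q) - damping * qd

# Condition a GP on noisy observations of the drift a(q, qdot).
rng = np.random.default_rng(0)
X = rng.uniform([-np.pi, -6.0], [np.pi, 6.0], size=(200, 2))
y = true_drift(X[:, 0], X[:, 1]) + 0.05 * rng.standard_normal(200)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=0.05**2).fit(X, y)

def control(q, qd, q_des, kp=25.0, kd=10.0):
    """Feedback-linearising control using the GP's mean drift estimate."""
    a_hat = gp.predict([[q, qd]])[0]
    v = kp * (q_des - q) + kd * (0.0 - qd)  # outer PD loop
    return (v - a_hat) / g_in               # cancel the drift in expectation

# Regulate to the upright position with simple Euler integration.
q, qd, dt = 0.1, 0.0, 0.01
for _ in range(500):
    u = control(q, qd, q_des=np.pi)
    qdd = true_drift(q, qd) + g_in * u
    qd += qdd * dt
    q += qd * dt
print(f"final angle: {q:.3f} rad (target {np.pi:.3f})")
```

Because the GP's posterior mean is subtracted out, the expected closed-loop error obeys the chosen linear PD dynamics, which is the sense in which the method offers guarantees on the expected trajectory.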


Real-time Uncertainty Decomposition for Online Learning Control

arXiv.org Artificial Intelligence

Safety-critical decisions based on machine learning models require a clear understanding of the involved uncertainties to avoid hazardous or risky situations. While aleatoric uncertainty can be explicitly modeled given a parametric description, epistemic uncertainty rather describes the presence or absence of training data. This paper proposes a novel generic method for modeling epistemic uncertainty and shows its advantages over existing approaches for neural networks on various data sets. It can be directly combined with aleatoric uncertainty estimates and allows for prediction in real-time as the inference is sample-free. We exploit this property in a model-based quadcopter control setting and demonstrate how the controller benefits from a differentiation between aleatoric and epistemic uncertainty in online learning of thermal disturbances.
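A hedged sketch of the decomposition idea: aleatoric uncertainty comes from a parametric noise model, while a sample-free epistemic term grows with distance from the training data, making it cheap enough for real-time control loops. The kernel form and constants below are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

class UncertaintyDecomposer:
    def __init__(self, X_train, noise_std, lengthscale=0.5, sigma_max=2.0):
        self.X = np.atleast_2d(X_train)
        self.aleatoric = noise_std  # parametric noise model (fitted/known)
        self.ls, self.sigma_max = lengthscale, sigma_max

    def epistemic(self, x):
        # High where no training data is nearby, near zero on the data.
        d2 = np.sum((self.X - x) ** 2, axis=1)
        similarity = np.exp(-0.5 * d2 / self.ls ** 2).max()
        return self.sigma_max * (1.0 - similarity)

    def total_std(self, x):
        # Independent sources combine in quadrature.
        return np.hypot(self.aleatoric, self.epistemic(x))

X_train = np.random.default_rng(1).uniform(-1, 1, size=(50, 2))
u = UncertaintyDecomposer(X_train, noise_std=0.1)
print("near data:", u.total_std(np.zeros(2)))        # ~ aleatoric only
print("far from data:", u.total_std(np.full(2, 5.0)))  # epistemic dominates
```

The split matters for control: a cautious controller should back off where the epistemic term is large (no data), but backing off cannot help against irreducible aleatoric noise, which is the distinction the quadcopter experiment exploits.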