Runway Sign Classifier: A DAL C Certifiable Machine Learning System
Dmitriev, Konstantin, Schumann, Johann, Bostanov, Islam, Abdelhamid, Mostafa, Holzapfel, Florian
In recent years, the remarkable progress of Machine Learning (ML) technologies within the domain of Artificial Intelligence (AI) has presented unprecedented opportunities for the aviation industry, paving the way for further advances in automation, including the potential for single-pilot or fully autonomous operation of large commercial airplanes. However, ML technology faces major incompatibilities with existing airborne certification standards, such as ML model traceability and explainability issues and the inadequacy of traditional coverage metrics, which make certification of ML-based airborne systems under current standards problematic. This paper presents a case study of an airborne system that uses a Deep Neural Network (DNN) for airport sign detection and classification. Building upon our previous work, which demonstrated compliance with Design Assurance Level (DAL) D, we upgrade the system to meet the more stringent requirements of DAL C. To achieve DAL C, we employ an established architectural mitigation technique involving two redundant and dissimilar Deep Neural Networks, further enhanced by novel ML-specific data management techniques. This work illustrates how the certification challenges of ML-based systems can be addressed for medium-criticality airborne applications.
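The dissimilar-redundancy mitigation described in this abstract can be sketched as a simple agreement check: a classification is accepted only when both independently developed networks concur. This is a minimal illustration; the model stand-ins, class label, and fallback policy are assumptions for demonstration, not the system's actual design.

```python
from typing import Callable, Optional

def dissimilar_vote(
    model_a: Callable[[bytes], str],
    model_b: Callable[[bytes], str],
    image: bytes,
) -> Optional[str]:
    """Accept a sign classification only when both dissimilar DNNs agree.

    Disagreement is treated as a detected fault: the function returns
    None so downstream logic can fall back to a safe state instead of
    acting on a potentially unreliable output.
    """
    label_a = model_a(image)
    label_b = model_b(image)
    return label_a if label_a == label_b else None

# Stand-ins for two independently trained, architecturally dissimilar
# networks (hypothetical; real models would run inference on the image).
cnn_classifier = lambda img: "taxiway_location"
vit_classifier = lambda img: "taxiway_location"

print(dissimilar_vote(cnn_classifier, vit_classifier, b"\x00"))  # taxiway_location
```

The safety argument rests on the two networks being dissimilar (different architectures, training pipelines, or datasets), so that a common-cause misclassification by both is far less likely than by either alone.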
Trusting Learning Based Adaptive Flight Control Algorithms
Mühlegg, Maximilian (Technische Universität München) | Holzapfel, Florian (Technische Universität München) | Chowdhary, Girish (Oklahoma State University)
Autonomous unmanned aerial systems (UAS) are envisioned to become increasingly utilized in commercial airspace. To be attractive for commercial applications, UAS must undergo quick development cycles, be cost-effective, and work reliably in changing environments. Learning-based adaptive control systems have been proposed to meet these demands, promising more flexibility than traditional linear control techniques. However, no consistent verification and validation (V&V) framework exists for adaptive controllers. The underlying purpose of the V&V processes in certifying control algorithms for aircraft is to build trust in a safety-critical system. In the past, most adaptive control algorithms were designed solely to ensure stability of a model system and to meet robustness requirements against selected uncertainties and disturbances. These assessments, however, do not guarantee the reliable performance of the real system that the V&V process requires. The question arises of how trust can be defined for learning-based adaptive control algorithms. From our perspective, the self-confidence of an adaptive flight controller will be an integral part of building trust in the system. In the adaptive control context, self-confidence denotes the controller's estimate of its own capability to operate reliably, and its ability to foresee the need for action before undesired behaviors lead to a loss of the system. In this paper we present a pathway toward achieving self-confidence for adaptive controllers; in particular, we elaborate how algorithms for diagnosis and prognosis can be integrated to support this process.
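One way to make such a self-confidence estimate concrete is to map recent model-prediction errors (a diagnosis signal) to a confidence score and raise a flag when it drops below a threshold (a prognosis trigger). The sliding window, exponential mapping, and threshold below are illustrative assumptions, not the design prescribed by the paper.

```python
import math
from collections import deque

class SelfConfidenceMonitor:
    """Estimates an adaptive controller's confidence in its own model
    from a sliding window of recent prediction errors (illustrative sketch)."""

    def __init__(self, window: int = 50, error_scale: float = 1.0):
        self.errors = deque(maxlen=window)  # recent |predicted - measured|
        self.error_scale = error_scale      # normalizes error magnitude

    def update(self, predicted_state: float, measured_state: float) -> None:
        self.errors.append(abs(predicted_state - measured_state))

    def confidence(self) -> float:
        """Map mean recent error to (0, 1]; 1.0 means full confidence."""
        if not self.errors:
            return 1.0
        mean_err = sum(self.errors) / len(self.errors)
        return math.exp(-mean_err / self.error_scale)

    def needs_action(self, threshold: float = 0.5) -> bool:
        """Prognosis hook: flag before degraded tracking becomes a loss."""
        return self.confidence() < threshold

# Usage: a persistent tracking error of 0.3 erodes confidence.
monitor = SelfConfidenceMonitor(window=10, error_scale=0.2)
for _ in range(10):
    monitor.update(predicted_state=1.0, measured_state=1.3)
print(monitor.needs_action())  # True: confidence = exp(-1.5) ≈ 0.22
```

A monitor of this kind only captures the diagnosis half of the picture; integrating prognosis, as the abstract proposes, would additionally require predicting how confidence evolves under the current flight condition.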