
Agility Prime Researches Electronic Parachute Powered by Machine Learning - Aviation Today

#artificialintelligence

Kentucky-based Aviation Safety Resources is developing ballistic parachutes for use in aircraft ranging from 60 lbs to 12,000 lbs. The Air Force's Agility Prime program awarded a phase I small business technology transfer (STTR) research contract to Jump Aero and Caltech to create an electronic parachute powered by machine learning that would allow the pilot to recalibrate the flight controller in midair in the event of damage, the company announced on April 7. "The electronic parachute is the name for the concept of implementing an adaptive/machine-learned control routine that would be impractical to certify for the traditional controller for use only in an emergency recovery mode -- something that would be switched on by the pilot if there is reason to believe that the baseline flight controller is not properly controlling the aircraft (if, for example, the aircraft has been damaged in midair)," Carl Dietrich, founder and president of Jump Aero Incorporated, told Avionics International. This technology was previously difficult to certify because of the need for deterministic proof of safety within these complex systems. The research was sparked when the Federal Aviation Administration certified an autonomous landing function for use in emergency situations, which created a path toward the possible certification of electronic parachute technology, according to Jump Aero. The machine-learned neural network can be trained on the non-linear behaviors that occur in an aircraft in the presence of substantial failures, such as those generated by a bird strike, Dietrich said.
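
The switching logic described above can be pictured with a minimal sketch, assuming a simple state/controller interface that is purely illustrative and is not Jump Aero's or Caltech's design:

```python
# Purely illustrative sketch of the "electronic parachute" switch described above:
# a pilot-armed handover from the certified baseline controller to an adaptive,
# machine-learned recovery controller. Interfaces and gains are invented here and
# are not Jump Aero's or Caltech's design.
from dataclasses import dataclass

@dataclass
class State:
    attitude: tuple        # roll, pitch, yaw
    rates: tuple           # angular rates

def baseline_controller(state: State):
    """Deterministic, certified control law used in normal operation (placeholder)."""
    return tuple(-0.5 * r for r in state.rates)

def learned_recovery_controller(state: State):
    """Stand-in for a neural-network policy trained on damaged-aircraft dynamics."""
    return tuple(-1.0 * r for r in state.rates)

def flight_control(state: State, electronic_parachute_armed: bool):
    # Emergency mode is entered only when the pilot arms it, mirroring the
    # "switched on by the pilot" behavior described in the article.
    if electronic_parachute_armed:
        return learned_recovery_controller(state)
    return baseline_controller(state)

print(flight_control(State((0, 0, 0), (0.1, -0.2, 0.0)), electronic_parachute_armed=True))
```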


DO-178C certifiable software to help integrate machine learning into avionics - Military Embedded Systems

#artificialintelligence

Intelligent Artifacts announced a new partnership with ConsuNova, a global provider of avionics systems certification services, to develop a software Cert-kit for airborne aerospace solutions. The new technology is intended to bolster machine learning in an avionics system by enabling the end user to understand why a decision was ultimately made by the software. The company claims that the operator will be able to trace a prediction back through analytic records to better understand and record the system's decision-making process. According to officials, development of this Cert-kit will allow Intelligent Artifacts to offer plug-and-play capabilities to aerospace companies looking to integrate proven, certifiable machine learning into full solution stacks that need to be Federal Aviation Administration (FAA) certified for deployment on aircraft. The work itself will include meeting design and code requirements, passing safety assessments, and providing complete documentation on Intelligent Artifacts' software solution.


Yann LeCun Team Uses Dictionary Learning To Peek Into Transformers' Black Boxes

#artificialintelligence

Transformer architectures have become the building blocks for many state-of-the-art natural language processing (NLP) models. While transformers are certainly powerful, researchers' understanding of how they actually work remains limited. This is problematic due to the lack of transparency and the possibility of biases being inherited via training data and algorithms, which could cause models to produce unfair or incorrect predictions. In the paper Transformer Visualization via Dictionary Learning: Contextualized Embedding as a Linear Superposition of Transformer Factors, a Yann LeCun team from Facebook AI Research, UC Berkeley and New York University leverages dictionary learning techniques to provide detailed visualizations of transformer representations and insights into the semantic structures -- such as word-level disambiguation, sentence-level pattern formation, and long-range dependencies -- that are captured by transformers. Previous attempts to visualize and analyze this "black box" issue in transformers include direct visualization and, more recently, "probing tasks" designed to interpret transformer models.
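
The core idea, viewing each contextualized embedding as a sparse linear superposition of learned "transformer factors," can be sketched with off-the-shelf dictionary learning. The snippet below is a minimal illustration rather than the authors' code; the random vectors stand in for real hidden states and the dictionary size is an assumption:

```python
# Minimal sketch (not the authors' code): dictionary learning over contextualized
# embeddings, treating each hidden state as a sparse linear superposition of
# learned "transformer factors". Random vectors stand in for real hidden states.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 768))   # stand-in for layer-l token embeddings

dico = MiniBatchDictionaryLearning(
    n_components=128,                 # number of transformer factors (assumed)
    alpha=1.0,                        # sparsity penalty
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dico.fit_transform(hidden_states)      # sparse coefficients per token
factors = dico.components_                     # the learned dictionary of factors

# Tokens with the largest coefficient on a factor illustrate what that factor encodes.
factor_id = 7
print(np.argsort(-codes[:, factor_id])[:10])
```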


Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

arXiv.org Artificial Intelligence

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs, wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global and contextual levels, (2) is designed to work with users with varying levels of background knowledge of the underlying causal model, and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on three real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
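
As a rough intuition for the counterfactual quantities involved (this is not LEWIS itself), the sketch below estimates a sufficiency-style score for a black-box classifier from its input-output behavior, assuming independent features and a crude intervention purely for illustration:

```python
# Rough illustration only: a sufficiency-style contrastive counterfactual score for
# a black-box classifier, assuming independent features (no causal graph).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=2000) > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)   # stands in for any opaque model

def sufficiency_score(model, X, feature, value):
    """Estimate P(decision flips to favorable | do(X_feature := value)) over negatives."""
    negatives = X[model.predict(X) == 0]
    intervened = negatives.copy()
    intervened[:, feature] = value             # crude intervention, for illustration only
    return float(np.mean(model.predict(intervened) == 1))

print(sufficiency_score(black_box, X, feature=0, value=2.0))
```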


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles, recognizing voice, etc. It is also an opportunity to implement and embed new capabilities that are out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


A new interpretable unsupervised anomaly detection method based on residual explanation

arXiv.org Artificial Intelligence

Despite their superior performance in modeling complex patterns to address challenging problems, the black-box nature of Deep Learning (DL) methods limits their application in real-world critical domains. The lack of a straightforward way to enable human reasoning about black-box decisions hinders any preventive action against unexpected events, which may lead to catastrophic consequences. To tackle the opacity of black-box models, interpretability has become a fundamental requirement in DL-based systems, building trust and knowledge by providing ways to understand the model's behavior. Although interpretability is currently a hot topic, further advances are still needed to overcome the limitations of current interpretability methods in unsupervised DL-based models for Anomaly Detection (AD). Autoencoders (AE) are the core of unsupervised DL-based AD applications, achieving best-in-class performance. However, because of the hybrid way they obtain results (requiring additional calculations outside the network), only model-agnostic interpretability methods can be applied to AE-based AD, and these agnostic methods are computationally expensive when processing a large number of parameters. In this paper we present RXP (Residual eXPlainer), a new interpretability method that addresses these limitations of AE-based AD in large-scale systems. It stands out for its implementation simplicity, low computational cost and deterministic behavior: explanations are obtained through deviation analysis of the reconstructed input features. In an experiment using data from a real heavy-haul railway line, the proposed method achieved superior performance compared to SHAP, demonstrating its potential to support decision making in large-scale critical systems.
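
The residual idea is simple enough to sketch: train an autoencoder on normal data and report the features with the largest reconstruction deviation as the explanation. The code below is a minimal illustration in the spirit of RXP, not the authors' implementation; the data and network size are arbitrary stand-ins:

```python
# Minimal residual-explanation sketch (in the spirit of RXP, not the authors' code):
# the features with the largest reconstruction deviation form the explanation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(size=(5000, 8))            # stand-in for nominal sensor readings
ae = MLPRegressor(hidden_layer_sizes=(4,),     # 4-unit bottleneck acts as the autoencoder
                  max_iter=500, random_state=0).fit(normal, normal)

def explain(sample):
    residual = np.abs(sample - ae.predict(sample.reshape(1, -1))[0])
    ranking = np.argsort(-residual)            # features sorted by deviation
    return ranking, residual[ranking]

anomaly = rng.normal(size=8)
anomaly[3] += 6.0                              # inject a fault into feature 3
print(explain(anomaly))                        # feature 3 should rank first
```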


Multi-Class Multiple Instance Learning for Predicting Precursors to Aviation Safety Events

arXiv.org Machine Learning

In recent years, there has been a rapid growth in the application of machine learning techniques that leverage aviation data collected from commercial airline operations to improve safety. Anomaly detection and predictive maintenance have been the main targets for machine learning applications. This paper, however, focuses on the identification of precursors, which is a relatively newer application. Precursors are events correlated with adverse events that happen prior to the adverse event itself. Precursor mining therefore provides many benefits, including understanding the reasons behind a safety incident and the ability to identify signatures that can be tracked throughout a flight to alert operators to the potential for an adverse event in the future. This work proposes using the multiple-instance learning (MIL) framework, a weakly supervised learning task, combined with a carefully designed binary classifier leveraging a Multi-Head Convolutional Neural Network-Recurrent Neural Network (MHCNN-RNN) architecture. Multi-class classifiers are then created and compared, enabling the prediction of different adverse events for any given flight, both by combining binary classifiers and by modifying the MHCNN-RNN to handle multiple outputs. The results show that the multiple binary classifiers perform better and are able to accurately forecast high-speed and high-path-angle events during the approach phase. The multiple binary classifiers are also capable of determining the aircraft parameters that are correlated with these events. The identified parameters can be considered precursors to the events and may be studied and tracked further to prevent these events in the future.
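
A minimal sketch of what a multi-head CNN plus RNN binary classifier over flight-parameter time series might look like is given below; the layer sizes, kernel widths, and interface are assumptions rather than the paper's exact MHCNN-RNN:

```python
# Illustrative sketch of a multi-head CNN + GRU binary classifier over flight data;
# the architecture details are assumptions, not the paper's exact MHCNN-RNN.
import torch
import torch.nn as nn

class MHCNNRNN(nn.Module):
    def __init__(self, n_params=20, kernel_sizes=(3, 5, 7), channels=16, hidden=64):
        super().__init__()
        # One convolutional "head" per kernel size, each reading all flight parameters.
        self.heads = nn.ModuleList(
            nn.Conv1d(n_params, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.rnn = nn.GRU(channels * len(kernel_sizes), hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)        # binary output: adverse event vs. nominal

    def forward(self, x):                      # x: (batch, time, n_params)
        x = x.transpose(1, 2)                  # -> (batch, n_params, time) for Conv1d
        feats = torch.cat([torch.relu(h(x)) for h in self.heads], dim=1)
        _, h_n = self.rnn(feats.transpose(1, 2))
        return torch.sigmoid(self.out(h_n[-1]))

model = MHCNNRNN()
flights = torch.randn(8, 200, 20)              # 8 flights, 200 time steps, 20 parameters
print(model(flights).shape)                    # (8, 1) event probabilities
```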


Meta Learning Black-Box Population-Based Optimizers

arXiv.org Artificial Intelligence

The no free lunch theorem states that no model is better suited to every problem. A question that arises from this is how to design methods that propose optimizers tailored to specific problems while achieving state-of-the-art performance. This paper addresses this issue by proposing the use of meta-learning to infer population-based black-box optimizers that can automatically adapt to specific classes of problems. We suggest a general modeling of population-based algorithms that results in the Learning-to-Optimize POMDP (LTO-POMDP), a meta-learning framework based on a specific partially observable Markov decision process (POMDP). From that framework's formulation, we propose to parameterize the algorithm using deep recurrent neural networks and to use a meta-loss function based on stochastic algorithms' performance to train efficient data-driven optimizers over several related optimization tasks. The performance of the learned optimizers based on this implementation is assessed on various black-box optimization tasks and on hyperparameter tuning of machine learning models. Our results reveal that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context. It thus allows better generalization and higher sample efficiency than state-of-the-art generic optimization algorithms, such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
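
A toy sketch of the underlying idea (far simpler than LTO-POMDP, with no recurrent policy) is to tune the meta-parameters of a basic population-based optimizer so that it performs well on average over a family of related black-box tasks:

```python
# Toy illustration only: tune the meta-parameters of a simple population-based
# optimizer so it performs well on average over a family of related tasks.
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    shift = rng.normal(size=5)
    return lambda x: np.sum((x - shift) ** 2)          # family of shifted sphere functions

def run_optimizer(task, pop_size, step, iters=30):
    best_x, best_f = np.zeros(5), task(np.zeros(5))
    for _ in range(iters):
        pop = best_x + step * rng.normal(size=(pop_size, 5))
        f = np.array([task(x) for x in pop])
        if f.min() < best_f:
            best_f, best_x = f.min(), pop[f.argmin()]
    return best_f

def meta_loss(pop_size, step, n_tasks=10):
    # Average final loss over several related tasks drawn from the same family.
    return np.mean([run_optimizer(make_task(), pop_size, step) for _ in range(n_tasks)])

# Crude "meta-training": random search over the optimizer's meta-parameters.
candidates = list(zip(rng.integers(4, 65, 10), rng.uniform(0.05, 1.0, 10)))
best = min(candidates, key=lambda c: meta_loss(int(c[0]), float(c[1])))
print("best (pop_size, step):", best)
```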


Helicopter Track Identification with Autoencoder

arXiv.org Artificial Intelligence

Computing power, big data, and advances in algorithms have led to a renewed interest in artificial intelligence (AI), especially in deep learning (DL). The success of DL largely lies in data representation, because different representations can indicate, to a degree, the different explanatory factors of variation behind the data. In the last few years, the most successful story in DL has been supervised learning. However, one challenge in applying supervised learning is that data labels are expensive to obtain, noisy, or only partially available. Considering that we human beings learn in an unsupervised way, self-supervised learning methods have garnered a lot of attention recently. A dominant force in self-supervised learning is the autoencoder, which has multiple uses (e.g., data representation, anomaly detection, denoising). This research explored the application of an autoencoder to learn an effective data representation of helicopter flight track data and then to support helicopter track identification. Our testing results are promising. For example, at Phoenix Deer Valley (DVT) airport, where 70% of recorded flight tracks have missing aircraft types, the autoencoder can help to identify twenty-two times more helicopters than otherwise detectable using rule-based methods; at Grand Canyon West Airport (1G4), the autoencoder can identify thirteen times more helicopters than a current rule-based approach. Our approach can also identify mislabeled aircraft types in the flight track data and find the true types for records with pseudo aircraft type labels such as HELO. With improved labeling, studies using these data sets can produce more reliable results.
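
One hedged way to picture such a pipeline (not the paper's actual implementation) is to learn a compact representation of flight-track feature vectors with a small autoencoder and then identify likely helicopters in the learned latent space; all data, feature counts, and labels below are synthetic stand-ins:

```python
# Hedged sketch (not the paper's pipeline): learn a compact representation of
# flight-track feature vectors with a small autoencoder, then identify likely
# helicopters by classifying in the latent space. All data here are synthetic.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
tracks = torch.randn(4000, 12)                     # e.g. speed/altitude/turn-rate statistics
labels = (tracks[:, 0] < 0).int().numpy()          # toy helicopter / non-helicopter labels

encoder = nn.Sequential(nn.Linear(12, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 12))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):                               # unsupervised reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(tracks)), tracks)
    loss.backward()
    opt.step()

latent = encoder(tracks).detach().numpy()
clf = LogisticRegression().fit(latent[:3000], labels[:3000])    # few labeled tracks
print(clf.score(latent[3000:], labels[3000:]))                  # identify the rest
```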


Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence

arXiv.org Artificial Intelligence

The societal and ethical implications of the use of opaque artificial intelligence systems for consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholder groups, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multi-faceted framework that brings more conceptual precision to the present debate by (1) identifying the types of explanations that are most pertinent to artificial intelligence predictions, (2) recognizing the relevance and importance of social and ethical values for the evaluation of these explanations, and (3) demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.