Goto

Collaborating Authors

AI researchers devise failure detection method for safety-critical machine learning

#artificialintelligence

Researchers from MIT, Stanford University, and the University of Pennsylvania have devised a method for predicting failures of safety-critical machine learning systems and efficiently estimating how often they occur. Safety-critical machine learning systems make decisions for automated technology like self-driving cars, robotic surgery, pacemakers, and autonomous flight systems for helicopters and planes. Unlike AI that helps you write an email or recommends a song, safety-critical system failures can result in serious injury or death. Problems with such machine learning systems can also cause financially costly events like SpaceX missing its landing pad. Researchers say their neural bridge sampling method gives regulators, academics, and industry experts a common reference for discussing the risks associated with deploying complex machine learning systems in safety-critical environments. In a paper titled "Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems," recently published on arXiv, the authors assert their approach can satisfy both the public's right to know that a system has been rigorously tested and an organization's desire to treat AI models like trade secrets.
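The article does not spell out the method's mechanics, but the baseline such work improves on is plain Monte Carlo estimation of a failure probability. The sketch below is a minimal illustration of that baseline, not the authors' neural bridge sampling approach; `simulate` and `is_failure` are hypothetical stand-ins for a randomized test scenario and a safety check. It shows why the 1/sqrt(n) error of naive sampling makes rare failures expensive to certify, which is the inefficiency that adaptive sampling methods like the one described target.

```python
import numpy as np

def estimate_failure_rate(simulate, is_failure, n_trials=100_000, seed=0):
    """Naive Monte Carlo estimate of a system's failure probability.

    simulate(rng) -> outcome of one randomized test scenario
    is_failure(outcome) -> True if the safety property was violated
    """
    rng = np.random.default_rng(seed)
    failures = sum(is_failure(simulate(rng)) for _ in range(n_trials))
    p_hat = failures / n_trials
    # The standard error shrinks like 1/sqrt(n), so certifying a very
    # small failure rate with plain sampling needs enormous trial counts.
    std_err = np.sqrt(p_hat * (1 - p_hat) / n_trials)
    return p_hat, std_err

# Toy stand-in for a safety-critical controller: count a failure when a
# noisy sensor reading drifts past a hard limit.
p_hat, err = estimate_failure_rate(
    simulate=lambda rng: rng.normal(loc=0.0, scale=1.0),
    is_failure=lambda reading: reading > 3.5,
)
print(f"estimated failure rate: {p_hat:.2e} +/- {err:.2e}")
```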


Our Future Lies in Making AI Robust and Verifiable - War on the Rocks

#artificialintelligence

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part b.), which asks what might happen if the United States fails to develop robust AI capabilities that address national security issues. It also responds to question five (part d.), which asks what measures the government should take to ensure AI systems for national security are trusted. We are hurtling towards a future in which AI is omnipresent: Siris will turn our iPhones into personal assistants, and Alexas will automate our homes and provide companionship to our elderly. Digital ad engines will feed our deepest retail dreams, and drones will deliver them to us in record time.


Predicting Model Failure using Saliency Maps in Autonomous Driving Systems

arXiv.org Machine Learning

While machine learning systems show a high success rate in many complex tasks, research shows they can also fail in very unexpected situations. The rise of machine learning products in safety-critical industries has increased attention on evaluating model robustness and estimating the failure probability of machine learning systems. In this work, we propose a design that trains a student model, a failure predictor, to predict the main model's error on input instances from their saliency maps. We implement and review preliminary results of our failure predictor model on an autonomous vehicle steering control system as an example of a safety-critical application.
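The abstract describes the setup only at a high level, so the sketch below is a minimal reconstruction of the idea rather than the authors' implementation: compute an input-gradient saliency map from the main model, then train a small student network to predict from that map alone whether the main model gets the input wrong. All names (`saliency_map`, `failure_predictor`, the toy architectures, the random data) are illustrative assumptions, and the task is framed as classification for brevity even though the paper's example is a steering-control system.

```python
import torch
import torch.nn as nn

def saliency_map(model, x):
    """Input-gradient saliency: |d(max logit)/d(input)| for one batch."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    logits.max(dim=1).values.sum().backward()
    return x.grad.abs()

# Main model and a small failure predictor that maps a saliency map to
# P(main model is wrong). Both are toy placeholders, not the paper's nets.
main_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
failure_predictor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))

opt = torch.optim.Adam(failure_predictor.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(x, y):
    """Label each example by whether the main model errs, then fit the
    failure predictor on the corresponding saliency maps."""
    with torch.no_grad():
        wrong = (main_model(x).argmax(dim=1) != y).float().unsqueeze(1)
    sal = saliency_map(main_model, x)
    opt.zero_grad()
    loss = loss_fn(failure_predictor(sal), wrong)
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random stand-in data.
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```

At test time the predictor only needs the saliency map of a new input, so it can flag likely failures without access to ground-truth labels.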


Open Problems in Engineering and Quality Assurance of Safety Critical Machine Learning Systems

arXiv.org Machine Learning

Fatal accidents are a major issue hindering the wide acceptance of safety-critical systems that use machine-learning and deep-learning models, such as automated-driving vehicles. Quality assurance frameworks are required for such machine learning systems, but there are no widely accepted and established quality-assurance concepts and techniques. At the same time, the open problems and the technical fields relevant to them have not been systematically organized. To establish standard quality assurance frameworks, it is necessary to visualize and organize these open problems in an interdisciplinary way, so that experts from many different technical fields can discuss them in depth and develop solutions. In the present study, we identify, classify, and explore the open problems in quality assurance of safety-critical machine-learning systems, and the corresponding industry and technological trends, using automated-driving vehicles as an example. Our results show that addressing these open problems requires incorporating knowledge from several different technological and industrial fields, including the automobile industry, statistics, software engineering, and machine learning.


Testing the blind spots in artificial intelligence

#artificialintelligence

Artificial intelligence models trained with deep learning can make mistakes when they encounter scenes they do not recognize, such as an object whose orientation, color, lighting, or weather conditions conflict with the datasets used to train the model. By investigating the robustness of deep learning models using a context-based approach, KAUST researchers have developed a means to predict situations in which artificial intelligence might fail. Artificial intelligence (AI) is becoming increasingly common as a technology that helps automated systems make better and more adaptive decisions. AI refers to algorithms that allow a system to learn from its environment and available inputs. In advanced applications, such as self-driving cars, AI is trained using an approach called deep learning, which relies solely on large volumes of sensor data without human involvement.