Collaborating Authors

sontag


Formally Verified Physics-Informed Neural Control Lyapunov Functions

Liu, Jun, Fitzsimmons, Maxwell, Zhou, Ruikun, Meng, Yiming

arXiv.org Artificial Intelligence

Control Lyapunov functions are a central tool in the design and analysis of stabilizing controllers for nonlinear systems. Constructing such functions, however, remains a significant challenge. In this paper, we investigate physics-informed learning and formal verification of neural network control Lyapunov functions. These neural networks solve a transformed Hamilton-Jacobi-Bellman equation, augmented by data generated using Pontryagin's maximum principle. Similar to how Zubov's equation characterizes the domain of attraction for autonomous systems, this equation characterizes the null-controllability set of a controlled system. This principled learning of neural network control Lyapunov functions outperforms alternative approaches, such as sum-of-squares and rational control Lyapunov functions, as demonstrated by numerical examples. As an intermediate step, we also present results on the formal verification of quadratic control Lyapunov functions, which, aided by satisfiability modulo theories solvers, can perform surprisingly well compared to more sophisticated approaches and efficiently produce global certificates of null-controllability.
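The quadratic control Lyapunov functions mentioned above satisfy a decrease condition that is easy to spot-check numerically. The sketch below is a sampling-based sanity check, not the SMT-based formal verification the paper performs, on the illustrative scalar system dx/dt = x + u with |u| <= 1 and candidate V(x) = x^2; the system and function names are assumptions, not from the paper.

```python
import numpy as np

def clf_decrease_margin(x, u_max=1.0):
    """For dx/dt = x + u and V(x) = x^2, the best admissible input gives
    min over |u| <= u_max of V'(x) * (x + u) = 2*x^2 - 2*|x|*u_max."""
    return 2.0 * x**2 - 2.0 * np.abs(x) * u_max

# V is a valid CLF on |x| < 1: the best admissible input makes V
# strictly decrease at every nonzero sampled state.
xs = np.linspace(-0.9, 0.9, 181)
xs = xs[xs != 0.0]
assert np.all(clf_decrease_margin(xs) < 0)
```

The margin vanishes exactly at |x| = 1, which is consistent with the null-controllability set of this toy system being the open interval (-1, 1).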


Unifying Controller Design for Stabilizing Nonlinear Systems with Norm-Bounded Control Inputs

Li, Ming, Sun, Zhiyong, Weiland, Siep

arXiv.org Artificial Intelligence

This paper revisits a classical challenge in the design of stabilizing controllers for nonlinear systems with a norm-bounded input constraint. By extending Lin-Sontag's universal formula and introducing a generic (state-dependent) scaling term, a unifying controller design method is proposed. Incorporating this generic scaling term yields a unified controller and enables the derivation of alternative universal formulas with various favorable properties, making the method suitable for tailored control designs that meet specific requirements and versatile across different control scenarios. Additionally, we present a constructive approach to determining the optimal scaling term, leading to an explicit solution of an optimization problem, termed the optimization-based universal formula. The resulting controller ensures asymptotic stability, satisfies a norm-bounded input constraint, and optimizes a predefined cost function. Finally, the essential properties of the unified controllers are analyzed, including smoothness, continuity at the origin, stability margin, and inverse optimality. Simulations validate the approach, showcasing its effectiveness on a challenging stabilization problem for a nonlinear system.
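The Lin-Sontag bounded universal formula that this paper extends can be sketched directly: for a control-affine system dx/dt = f(x) + g(x)u with CLF V, write a = LfV(x) and b = LgV(x). The scalar system and the choice of V below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lin_sontag(a, b):
    """Lin-Sontag universal formula for a scalar input with |u| <= 1,
    where a = LfV(x) and b = LgV(x)."""
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / (b**2 * (1.0 + np.sqrt(1.0 + b**2))) * b

# Illustrative check on dx/dt = x + u with V(x) = x^2 / 2,
# so a = x^2 and b = x; the formula is stabilizing for |x| < 1.
x = 0.5
a, b = x**2, x
u = lin_sontag(a, b)
assert abs(u) <= 1.0      # respects the norm bound on the input
assert a + b * u < 0.0    # V decreases along the closed loop
```

The state-dependent scaling term proposed in the paper generalizes the fixed denominator in this formula; the sketch only shows the classical baseline.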


Characterizing Smooth Safety Filters via the Implicit Function Theorem

Cohen, Max H., Ong, Pio, Bahati, Gilbert, Ames, Aaron D.

arXiv.org Artificial Intelligence

Abstract-- Optimization-based safety filters, such as control barrier function (CBF) based quadratic programs (QPs), have demonstrated success in controlling autonomous systems to achieve complex goals. These CBF-QPs can be shown to be continuous, but are generally not smooth, let alone continuously differentiable. In this paper, smooth safety filters are characterized via the implicit function theorem. This characterization leads to families of smooth universal formulas for safety-critical controllers that quantify the conservatism of the resulting safety filter, whose utility is demonstrated through illustrative examples. Over the past decade, control barrier functions (CBFs) [1] have proven to be a powerful tool for designing controllers that enforce safety on nonlinear systems. Most often, such safety filters are instantiated via optimization problems, typically a quadratic program; this work instead adapts smooth universal formulas for control Lyapunov functions (CLFs) [12] to CBFs.
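To make the smoothing idea concrete: for a scalar input, the CBF-QP has a closed-form solution involving a ReLU, and replacing that kink with a smooth over-approximation yields a smooth, slightly more conservative filter. The sketch below is a hedged illustration; the function names, the example system, and the particular square-root smoothing are assumptions, not the paper's exact construction.

```python
import numpy as np

def qp_filter(u_nom, lf_h, lg_h, alpha_h):
    """Closed-form CBF-QP solution (scalar input):
    min (u - u_nom)^2  s.t.  lf_h + lg_h*u + alpha_h >= 0."""
    psi = lf_h + lg_h * u_nom + alpha_h
    return u_nom + max(0.0, -psi) * lg_h / lg_h**2

def smooth_filter(u_nom, lf_h, lg_h, alpha_h, sigma=1e-2):
    """Smooth variant: ReLU(-psi) is replaced by the smooth upper bound
    (-psi + sqrt(psi^2 + sigma)) / 2, an illustrative choice."""
    psi = lf_h + lg_h * u_nom + alpha_h
    lam = (-psi + np.sqrt(psi**2 + sigma)) / 2.0
    return u_nom + lam * lg_h / lg_h**2

# Illustrative: dx/dt = u, barrier h(x) = 1 - x^2, alpha(h) = h, at x = 0.9
x, u_nom = 0.9, 1.0
lf_h, lg_h, alpha_h = 0.0, -2.0 * x, 1.0 - x**2
u_qp = qp_filter(u_nom, lf_h, lg_h, alpha_h)
u_sm = smooth_filter(u_nom, lf_h, lg_h, alpha_h)
assert lf_h + lg_h * u_qp + alpha_h >= -1e-9  # QP: constraint active
assert lf_h + lg_h * u_sm + alpha_h >= 0.0    # smooth: safe, more conservative
```

Because the smooth bound dominates the ReLU, the smooth filter intervenes at least as strongly as the QP, which is one way the conservatism mentioned in the abstract can be quantified.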


PAC bounds of continuous Linear Parameter-Varying systems related to neural ODEs

Rácz, Dániel, Petreczky, Mihály, Daróczy, Bálint

arXiv.org Artificial Intelligence

We consider the problem of learning Neural Ordinary Differential Equations (neural ODEs) within the context of continuous-time Linear Parameter-Varying (LPV) systems. LPV systems contain bilinear systems, which are known to be universal approximators for nonlinear systems. Moreover, a large class of neural ODEs can be embedded into LPV systems. As our main contribution, we provide Probably Approximately Correct (PAC) bounds under stability for LPV systems related to neural ODEs. The resulting bounds have the advantage that they do not depend on the integration interval.


Learning to Defer with Limited Expert Predictions

Hemmer, Patrick, Thede, Lukas, Vössing, Michael, Jakubik, Johannes, Kühl, Niklas

arXiv.org Artificial Intelligence

Recent research suggests that combining AI models with a human expert can exceed the performance of either alone. The combination of their capabilities is often realized by learning to defer algorithms that enable the AI to learn to decide whether to make a prediction for a particular instance or defer it to the human expert. However, to accurately learn which instances should be deferred to the human expert, a large number of expert predictions that accurately reflect the expert's capabilities are required -- in addition to the ground truth labels needed to train the AI. This requirement, shared by many learning to defer algorithms, hinders their adoption in scenarios where the responsible expert regularly changes or where acquiring a sufficient number of expert predictions is costly. In this paper, we propose a three-step approach to reduce the number of expert predictions required to train learning to defer algorithms. It encompasses (1) the training of an embedding model with ground truth labels to generate feature representations that serve as a basis for (2) the training of an expertise predictor model to approximate the expert's capabilities. (3) The expertise predictor generates artificial expert predictions for instances not yet labeled by the expert, which are required by the learning to defer algorithms. We evaluate our approach on two public datasets: one with "synthetically" generated human experts and another, from the medical domain, containing real-world radiologists' predictions. Our experiments show that the approach allows the training of various learning to defer algorithms with a minimal number of human expert predictions. Furthermore, we demonstrate that even a small number of expert predictions per class is sufficient for these algorithms to exceed the performance that either the AI or the human expert can achieve individually.
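The three-step pipeline can be sketched on toy data. Everything below is an illustrative stand-in: the raw features stand in for the learned embedding of step (1), a 1-nearest-neighbour rule stands in for the expertise predictor of step (2), and the simulated expert is an assumption, not either of the paper's datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: 2-D inputs, binary ground truth.
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

# Simulated expert: accurate on the upper half-plane, guesses elsewhere.
expert = np.where(X[:, 1] > 0, y, rng.integers(0, 2, size=len(X)))

# Step 1: embedding model trained on ground truth labels.
# As a stand-in for a learned network, use the raw features directly.
emb = X

# Step 2: expertise predictor trained on a small budget of expert labels
# (1-nearest-neighbour in embedding space, an illustrative choice).
budget = 30
idx = rng.choice(len(X), size=budget, replace=False)

def predict_expert(z):
    d = np.linalg.norm(emb[idx] - z, axis=1)
    return expert[idx[np.argmin(d)]]

# Step 3: artificial expert predictions for all instances, ready to feed
# into a learning to defer algorithm in place of real expert labels.
artificial = np.array([predict_expert(z) for z in emb])
agreement = (artificial == expert).mean()
assert agreement > 0.5  # better than chance with only 30 expert labels
```

In the paper the embedding and expertise predictor are trained models; the point of the sketch is only the data flow, in which a small expert-labeled budget is amplified into full coverage.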


AI Assurance using Causal Inference: Application to Public Policy

Svetovidov, Andrei, Rahman, Abdul, Batarseh, Feras A.

arXiv.org Artificial Intelligence

Developing and implementing AI-based solutions helps state and federal government agencies, research institutions, and commercial companies enhance decision-making processes, automate chain operations, and reduce the consumption of natural and human resources. At the same time, most AI approaches used in practice can only be represented as "black boxes" and suffer from a lack of transparency. This can eventually lead to unexpected outcomes and undermine trust in such systems. Therefore, it is crucial not only to develop effective and robust AI systems, but also to make sure their internal processes are explainable and fair. Our goal in this chapter is to introduce the topic of designing assurance methods for AI systems that make high-impact decisions, using the example of the technology sector of the US economy. We explain how such domains would benefit from revealing cause-effect relationships between key metrics by presenting a causal experiment on a technology-economics dataset. Several causal inference approaches and AI assurance techniques are reviewed, and the transformation of the data into a graph-structured dataset is demonstrated.


Beyond Perturbation Stability: LP Recovery Guarantees for MAP Inference on Noisy Stable Instances

Lang, Hunter, Reddy, Aravind, Sontag, David, Vijayaraghavan, Aravindan

arXiv.org Machine Learning

Several works have shown that perturbation stable instances of the MAP inference problem in Potts models can be solved exactly using a natural linear programming (LP) relaxation. However, most of these works give few (or no) guarantees for the LP solutions on instances that do not satisfy the relatively strict perturbation stability definitions. In this work, we go beyond these stability results by showing that the LP approximately recovers the MAP solution of a stable instance even after the instance is corrupted by noise. This "noisy stable" model realistically fits with practical MAP inference problems: we design an algorithm for finding "close" stable instances, and show that several real-world instances from computer vision have nearby instances that are perturbation stable. These results suggest a new theoretical explanation for the excellent performance of this LP relaxation in practice.
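The LP in question is the standard local-polytope relaxation of MAP inference. A minimal sketch on a two-node Potts model (a tree, where the relaxation is known to be tight; the unary and pairwise weights are illustrative) might look like:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny two-node pairwise model. Decision variables, in order:
#   mu1(0), mu1(1), mu2(0), mu2(1),
#   mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
theta1 = [0.0, 1.0]   # node 1 prefers label 0
theta2 = [2.0, 0.0]   # node 2 prefers label 1
w = 0.5               # Potts penalty for disagreeing labels
c = theta1 + theta2 + [0.0, w, w, 0.0]  # energy to minimize

A_eq = [
    [1, 1, 0, 0, 0, 0, 0, 0],    # mu1 sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],    # mu2 sums to 1
    [-1, 0, 0, 0, 1, 1, 0, 0],   # edge marginal consistent with mu1(0)
    [0, -1, 0, 0, 0, 0, 1, 1],   # edge marginal consistent with mu1(1)
    [0, 0, -1, 0, 1, 0, 1, 0],   # edge marginal consistent with mu2(0)
    [0, 0, 0, -1, 0, 1, 0, 1],   # edge marginal consistent with mu2(1)
]
b_eq = [1, 1, 0, 0, 0, 0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
mu = res.x
assert res.status == 0
# On this tree the LP optimum is integral: labels (0, 1), energy 0.5.
assert abs(res.fun - 0.5) < 1e-6
assert np.allclose(np.round(mu), mu, atol=1e-6)
```

On non-tree graphs the same constraints only outer-approximate the marginal polytope, which is exactly where the stability and noisy-stability guarantees discussed above become relevant.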


Algorithm reduces use of riskier antibiotics for UTIs

#artificialintelligence

One paradox about antibiotics is that, broadly speaking, the more we use them, the less well they work. The Darwinian process of bacteria growing resistant to antibiotics means that, when the drugs don't work, we can no longer treat infections, leading groups like the World Health Organization to warn about our ability to control major public health threats. Because of their ubiquity, one particularly concerning topic is urinary tract infections (UTIs), which affect half of all women and add almost $4 billion a year in unnecessary health-care costs. Doctors often treat UTIs using antibiotics called fluoroquinolones, which are inexpensive and generally effective. However, they have also been found to put women at risk of becoming infected with other difficult-to-treat bacteria, such as C. difficile and certain species of Staphylococcus, and to increase their risk of tendon injuries and life-threatening conditions like aortic tears. As a result, medical associations have issued guidelines recommending fluoroquinolones as "second-line treatments" to be used only when other antibiotics are ineffective or cause adverse reactions.


MIT Researchers Develop AI System That Can Defer To Human

#artificialintelligence

The human-AI hybrid model performed eight percent better than either the human or the AI could on their own. Researchers at MIT have developed an artificial intelligence system that is able to understand when to defer a task to an expert, adapting to the collaborator's availability and level of expertise. Many AI systems use this collaborative approach, in which an automated service works in most cases while a human is brought in for edge cases. Facebook's content moderation platform runs like this, using image and language recognition systems to automatically filter inappropriate content while a large team of human moderators deals with more challenging material. The hybrid system is also able to reduce computational cost and train the AI platform with fewer data samples, saving businesses time and money.


An automated health care system that understands when to step in

#artificialintelligence

In recent years, entire industries have popped up that rely on the delicate interplay between human workers and automated software. Companies like Facebook work to keep hateful and violent content off their platforms using a combination of automated filtering and human moderators. In the medical field, researchers at MIT and elsewhere have used machine learning to help radiologists better detect different forms of cancer. What can be tricky about these hybrid approaches is understanding when to rely on the expertise of people versus programs. This isn't always merely a question of who does a task "better"; indeed, if a person has limited bandwidth, the system may have to be trained to minimize how often it asks for help.