

Chance-constrained Static Schedules for Temporally Probabilistic Plans

Journal of Artificial Intelligence Research

Time management under uncertainty is essential to large-scale projects. From space exploration to industrial production, there is a need to schedule and perform activities given complex specifications on timing. In order to generate schedules that are robust to uncertainty in the duration of activities, prior work has focused on a problem framing that uses an interval-bounded uncertainty representation. However, such approaches are unable to take advantage of known probability distributions over duration. In this paper we concentrate on a probabilistic formulation of temporal problems with uncertain duration, called the probabilistic simple temporal problem. As distributions often have an unbounded range of outcomes, we consider chance-constrained solutions, with guarantees on the probability of meeting temporal constraints. By considering distributions over uncertain duration, we are able to use risk as a resource, reason over the relative likelihood of outcomes, and derive higher-utility solutions. We first demonstrate our approach by encoding the problem as a convex program. We then develop a more efficient hybrid algorithm whose parent solver generates risk allocations and whose child solver generates schedules for a particular risk allocation. The child is made efficient by leveraging existing interval-bounded scheduling algorithms, while the parent is made efficient by extracting conflicts over risk allocations. We perform numerical experiments to show the advantages of reasoning over probabilistic uncertainty, by comparing the utility of schedules generated with risk allocation against those generated from reasoning over bounded uncertainty. We also empirically show that solution time is greatly reduced by incorporating conflict-directed risk allocation.
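To make the parent/child decomposition concrete, here is a minimal Python sketch, not the paper's algorithm: a hypothetical child solver turns a candidate risk allocation into fixed duration bounds via Gaussian quantiles and checks a single deadline, while a hypothetical parent solver enumerates allocations on a small grid, skipping any allocation dominated by one already found infeasible (a crude stand-in for conflict extraction). The activity parameters, deadline, and risk bound are made-up numbers.

from itertools import product
from statistics import NormalDist

# Two back-to-back activities with normally distributed durations (mean, std dev).
ACTIVITIES = {"a": (10.0, 2.0), "b": (12.0, 3.0)}
DEADLINE = 32.0      # both activities must finish by this time
RISK_BOUND = 0.05    # allowed total probability of missing the deadline

def child_schedulable(risks):
    """Child solver: turn a risk allocation into fixed upper bounds via
    Gaussian quantiles and check the resulting interval-bounded problem."""
    upper_bounds = [
        NormalDist(mu, sigma).inv_cdf(1.0 - r)
        for r, (mu, sigma) in zip(risks, ACTIVITIES.values())
    ]
    return sum(upper_bounds) <= DEADLINE

def parent_search(grid=(0.005, 0.01, 0.02, 0.04)):
    """Parent solver: enumerate candidate risk allocations on a grid, pruning
    any allocation dominated by one already shown to be infeasible."""
    infeasible = []
    for risks in product(grid, repeat=len(ACTIVITIES)):
        if sum(risks) > RISK_BOUND:
            continue   # violates the overall chance constraint
        if any(all(r <= b for r, b in zip(risks, bad)) for bad in infeasible):
            continue   # pointwise less risk than a known failure: still infeasible
        if child_schedulable(risks):
            return dict(zip(ACTIVITIES, risks))
        infeasible.append(risks)
    return None

print(parent_search())   # e.g. {'a': 0.01, 'b': 0.04}

Because allocating less risk to an activity only widens its quantile bound, an allocation that is pointwise no riskier than a known infeasible one is itself infeasible; that monotonicity is what makes the simple pruning above sound, standing in for the conflict-directed reasoning of the actual parent solver.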


Motion Planning Under Uncertainty with Complex Agents and Environments via Hybrid Search

Journal of Artificial Intelligence Research

As autonomous systems and robots are applied to more real-world situations, they must reason about uncertainty when planning actions. Mission success oftentimes cannot be guaranteed and the planner must reason about the probability of failure. Unfortunately, computing a trajectory that satisfies mission goals while constraining the probability of failure is difficult because of the need to reason about complex, multidimensional probability distributions. Recent methods have seen success using chance-constrained, model-based planning. However, the majority of these methods can only handle simple environment and agent models. We argue that there are two main drawbacks of current approaches to goal-directed motion planning under uncertainty. First, current methods suffer from an inability to deal with expressive environment models such as 3D non-convex obstacles. Second, most planners rely on considerable simplifications when computing trajectory risk, including approximating the agent's dynamics, geometry, and uncertainty. In this article, we apply hybrid search to the risk-bound, goal-directed planning problem. The hybrid search consists of a region planner and a trajectory planner. The region planner makes discrete choices by reasoning about geometric regions that the autonomous agent should visit in order to accomplish its mission. In formulating the region planner, we propose landmark regions that help produce obstacle-free paths. The region planner passes paths through the environment to a trajectory planner; the task of the trajectory planner is to optimize trajectories that respect the agent's dynamics and the user's desired risk of mission failure. We discuss three approaches to modeling trajectory risk: a CDF-based approach, a sampling-based collocation method, and an algorithm named Shooting Method Monte Carlo. These models allow computation of trajectory risk with more complex environments, agent dynamics, geometries, and models of uncertainty than past approaches. A variety of 2D and 3D test cases are presented, including a linear case, a Dubins car model, and an underwater autonomous vehicle. The method is shown to outperform other methods in terms of speed and utility of the solution. Additionally, the models of trajectory risk are shown to better approximate risk in simulation.
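As a rough illustration of the sampling-based risk models mentioned above (only loosely in the spirit of Shooting Method Monte Carlo, not the article's algorithm), the sketch below estimates the collision risk of one nominal trajectory by Monte Carlo rollouts of a noisy 2D point agent against a single box obstacle. The dynamics, noise model, obstacle, and risk bound are all illustrative assumptions.

import random

STEPS = 20
DT = 0.5
NOISE_STD = 0.05
OBSTACLE = (4.0, 6.0, -0.5, 0.5)   # axis-aligned box: x_min, x_max, y_min, y_max
RISK_BOUND = 0.05

def rollout():
    """Simulate one noisy rollout of a constant-velocity 2D agent and report
    whether it ever enters the obstacle box."""
    x, y = 0.0, -1.0
    vx, vy = 0.5, 0.1                      # nominal control: drift up and right
    for _ in range(STEPS):
        x += vx * DT + random.gauss(0.0, NOISE_STD)
        y += vy * DT + random.gauss(0.0, NOISE_STD)
        x_min, x_max, y_min, y_max = OBSTACLE
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return True                    # collision on this sample
    return False

def estimate_risk(num_samples=10_000):
    """Estimate the probability of collision by simple Monte Carlo."""
    failures = sum(rollout() for _ in range(num_samples))
    return failures / num_samples

risk = estimate_risk()
print(f"estimated collision risk: {risk:.3f}, bound satisfied: {risk <= RISK_BOUND}")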


Learning Nonlinear Dynamics in Efficient, Balanced Spiking Networks Using Local Plasticity Rules

AAAI Conferences

The brain uses spikes in neural circuits to perform many dynamical computations. The computations are performed with properties such as spiking efficiency, i.e. a minimal number of spikes, and robustness to noise. A major obstacle to learning computations in artificial spiking neural networks with such desired biological properties is our lack of understanding of how biological spiking neural networks learn computations. Here, we consider the credit assignment problem, i.e. determining the local contribution of each synapse to the network's global output error, for learning nonlinear dynamical computations in a spiking network with the desired properties of biological networks. We approach this problem by fusing the theory of efficient, balanced neural networks (EBN) with nonlinear adaptive control theory to propose a local learning rule. Locality of the learning rule is ensured by feeding back into the network its own error, resulting in a learning rule that depends solely on presynaptic inputs and error feedback. The spiking efficiency and robustness of the network are guaranteed by maintaining a tight excitatory/inhibitory balance, ensuring that each spike represents a local projection of the global output error and minimizes a loss function. The resulting networks can learn to implement complex dynamics with very small numbers of neurons and spikes, exhibit the same spike train variability as observed experimentally, and are extremely robust to noise and neuronal loss.
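The following is a deliberately simplified, rate-based sketch of the kind of local rule described above, not the paper's spiking EBN derivation: each weight update uses only its presynaptic input and an error signal fed back through fixed weights, with no global gradient computation. The network sizes, target function, and learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 3, 50
W_in = rng.normal(0.0, 0.1, (n_neurons, n_inputs))   # learned input weights
W_out = rng.normal(0.0, 0.1, (1, n_neurons))          # fixed readout weights
eta = 1e-3                                            # learning rate

def target(x):
    return np.tanh(x.sum(keepdims=True))              # nonlinear mapping to learn

for _ in range(5000):
    x = rng.normal(0.0, 1.0, n_inputs)
    r = np.tanh(W_in @ x)              # presynaptic "rates" of the hidden units
    y = W_out @ r                      # network output
    error = target(x) - y              # global output error ...
    feedback = W_out.T @ error         # ... fed back into the network
    # Local rule: each synapse uses only its presynaptic input x and the error
    # feedback arriving at its postsynaptic neuron -- no global gradient.
    W_in += eta * np.outer(feedback, x)

print("final squared error:", (error ** 2).item())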


Learning the Probability of Activation in the Presence of Latent Spreaders

AAAI Conferences

When an infection spreads in a community, an individual's probability of becoming infected depends on both her susceptibility and her exposure to the contagion through contact with others. While one often has knowledge regarding an individual's susceptibility, in many cases it is unknown whether or not an individual's contacts are contagious. We study the problem of predicting if an individual will adopt a contagion in the presence of multiple modes of infection (exposure/susceptibility) and latent neighbor influence. We present a generative probabilistic model and a variational inference method to learn the parameters of our model. Through a series of experiments on synthetic data, we measure the ability of the proposed model to identify latent spreaders and predict the risk of infection. Applied to a real dataset of 20,000 hospital patients, we demonstrate the utility of our model in predicting the onset of a healthcare-associated infection using patient room-sharing and nurse-sharing networks. Our model outperforms existing benchmarks and provides actionable insights for the design and implementation of targeted interventions to curb the spread of infection.
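Here is a small sketch of the kind of activation model involved, under the assumption of a noisy-OR combination (an illustration, not the paper's exact parameterization): an individual's infection probability combines her own susceptibility with per-neighbor transmission, where each neighbor's spreader status is latent and enters only through its probability.

def activation_probability(susceptibility, neighbors):
    """P(infected) under a noisy-OR of background susceptibility and
    per-neighbor transmission, weighting each neighbor by the probability
    that it is a (latent) spreader.

    neighbors: list of (p_spreader, transmission_rate) pairs.
    """
    p_not_infected = 1.0 - susceptibility
    for p_spreader, rate in neighbors:
        # A neighbor transmits only if it is a spreader; marginalize over that.
        p_not_infected *= 1.0 - p_spreader * rate
    return 1.0 - p_not_infected

# One fairly susceptible patient sharing a room with two contacts:
print(activation_probability(0.10, [(0.8, 0.3), (0.2, 0.5)]))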


Streaming Non-Monotone Submodular Maximization: Personalized Video Summarization on the Fly

AAAI Conferences

The need for real-time analysis of rapidly produced data streams (e.g., video and image streams) has motivated the design of streaming algorithms that can efficiently extract and summarize useful information from massive data "on the fly." Such problems can often be reduced to maximizing a submodular set function subject to various constraints. While efficient streaming methods have recently been developed for monotone submodular maximization, in a wide range of applications, such as video summarization, the underlying utility function is non-monotone, and there are often various constraints imposed on the optimization problem to consider privacy or personalization. We develop the first efficient single-pass streaming algorithm, Streaming Local Search, that for any streaming monotone submodular maximization algorithm with approximation guarantee α under a collection of independence systems I, provides a constant 1/(1+2/√α+1/α+2d(1+√α)) approximation guarantee for maximizing a non-monotone submodular function under the intersection of I and d knapsack constraints. Our experiments show that for video summarization, our method runs more than 1700 times faster than previous work, while maintaining practically the same performance.
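For intuition, here is a hedged sketch of the monotone, cardinality-constrained building block that single-pass streaming methods of this kind rely on: a thresholding rule that keeps an element only if its marginal gain is large enough, applied to a toy coverage objective. This is not Streaming Local Search itself, which additionally runs local search over retained elements to handle non-monotonicity, independence systems, and knapsack constraints; the threshold, guess of the optimum, and data are illustrative assumptions.

def coverage(selected):
    """Toy monotone submodular utility: number of distinct items covered."""
    return len(set().union(*selected)) if selected else 0

def threshold_streaming(stream, k, opt_guess):
    """Keep an element if its marginal gain is at least opt_guess / (2 * k);
    one pass over the stream, O(k) memory."""
    summary = []
    for element in stream:
        if len(summary) >= k:
            break
        gain = coverage(summary + [element]) - coverage(summary)
        if gain >= opt_guess / (2 * k):
            summary.append(element)
    return summary

# Each stream element "covers" a set of concepts (e.g., scenes in a video).
stream = [{1, 2}, {2, 3}, {3}, {4, 5, 6}, {1, 6}, {7}]
print(threshold_streaming(stream, k=3, opt_guess=8))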


Co-Domain Embedding Using Deep Quadruplet Networks for Unseen Traffic Sign Recognition

AAAI Conferences

Recent advances in the field of computer vision have provided highly cost-effective solutions for developing advanced driver assistance systems (ADAS) for automobiles. Furthermore, computer vision components are becoming indispensable to improve safety and to achieve AI in the form of fully automated, self-driving cars. This is mostly by virtue of the success of deep learning, which is regarded to be due to the presence of large-scale supervised data, proper computation power, and algorithmic advances (Goodfellow, Bengio, and Courville 2016). Thus, our approach is based on the following hypotheses: 1) the existence of a co-embedding space for synthetic and real data, and 2) the existence of an embedding space where real data is condensed around a synthetic anchor for each class. We illustrate the idea in Figure 1. Taking these into account, we learn two nonlinear mappings using a neural network. The first maps a real sample into an embedding space, and the second maps a synthetic anchor onto the same metric space.
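To illustrate the co-embedding idea, here is a hedged sketch of a quadruplet-style hinge loss over toy embeddings: a real sample is pulled toward the synthetic anchor of its own class and pushed away from an anchor and a real sample of another class. The margins, the exact quadruplet composition, and the toy vectors are assumptions for illustration, not the paper's published loss.

import numpy as np

def quadruplet_loss(real, anchor_same, real_other, anchor_other,
                    margin1=1.0, margin2=0.5):
    """Hinge losses on squared distances in the shared embedding space."""
    d = lambda a, b: np.sum((a - b) ** 2)
    pos = d(real, anchor_same)                        # same-class pair: keep small
    loss = max(0.0, pos - d(real, anchor_other) + margin1)
    loss += max(0.0, pos - d(real_other, anchor_same) + margin2)
    return loss

# Toy 2-D embeddings produced by the two (hypothetical) mapping networks:
print(quadruplet_loss(np.array([0.1, 0.2]), np.array([0.0, 0.0]),
                      np.array([2.0, 2.1]), np.array([2.0, 2.0])))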


Towards Formal Definitions of Blameworthiness, Intention, and Moral Responsibility

AAAI Conferences

We provide formal definitions of degree of blameworthiness and intention relative to an epistemic state (a probability over causal models and a utility function on outcomes). These, together with a definition of actual causality, provide the key ingredients for moral responsibility judgments. We show that these definitions give insight into commonsense intuitions in a variety of puzzling cases from the literature.
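As a hedged illustration of one ingredient of such definitions (not the paper's exact formalism): given an epistemic state as a probability over simple causal models, one can ask how much more likely a harmful outcome becomes under the performed action than under an alternative. The toy models, action names, and numbers below are invented for illustration.

# Each "model" maps an action to the probability that the harmful outcome occurs.
epistemic_state = [
    (0.7, {"drive_fast": 0.30, "drive_slow": 0.05}),   # (model probability, outcome probs)
    (0.3, {"drive_fast": 0.10, "drive_slow": 0.05}),
]

def outcome_prob(action):
    """Probability of the harmful outcome under `action`, marginalizing over models."""
    return sum(p_model * probs[action] for p_model, probs in epistemic_state)

delta = max(0.0, outcome_prob("drive_fast") - outcome_prob("drive_slow"))
print(f"increase in outcome probability attributable to the action: {delta:.2f}")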


Semi-Supervised Biomedical Translation With Cycle Wasserstein Regression GANs

AAAI Conferences

The biomedical field offers many learning tasks that share unique challenges: large amounts of unpaired data, and a high cost to generate labels. In this work, we develop a method to address these issues with semi-supervised learning in regression tasks (e.g., translation from source to target). Our model uses adversarial signals to learn from unpaired datapoints, and imposes a cycle-loss reconstruction error penalty to regularize mappings in either direction against one another. We first evaluate our method on synthetic experiments, demonstrating two primary advantages of the system: 1) distribution matching via the adversarial loss and 2) regularization towards invertible mappings via the cycle loss. We then show a regularization effect and improved performance when paired data is supplemented by additional unpaired data on two real biomedical regression tasks: estimating the physiological effect of medical treatments, and extrapolating gene expression (transcriptomics) signals. Our proposed technique is a promising initial step towards more robust use of adversarial signals in semi-supervised regression, and could be useful for other tasks (e.g., causal inference or modality translation) in the biomedical field.
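A minimal sketch of how the three training signals described above can be assembled into one objective: a supervised regression loss on paired data, an adversarial critic score on unpaired data, and a cycle-reconstruction penalty coupling the forward and backward mappings. The linear stand-ins for the generators, the toy critic, and the weighting coefficients are illustrative assumptions, not the paper's architecture.

import numpy as np

def combined_loss(G, F, critic_score, paired, unpaired_x,
                  lam_adv=0.1, lam_cyc=1.0):
    """G: source -> target mapping, F: target -> source mapping."""
    # 1) Supervised term on paired (x, y) data.
    sup = np.mean([(G(x) - y) ** 2 for x, y in paired])
    # 2) Adversarial term: the critic should score G(x) like a real target.
    adv = -np.mean([critic_score(G(x)) for x in unpaired_x])
    # 3) Cycle term: mapping forward then back should recover the input.
    cyc = np.mean([(F(G(x)) - x) ** 2 for x in unpaired_x])
    return sup + lam_adv * adv + lam_cyc * cyc

# Toy 1-D example with linear stand-ins for the generators and critic.
G = lambda x: 2.0 * x + 0.1             # learned source -> target map
F = lambda y: 0.5 * (y - 0.1)           # learned target -> source map
critic_score = lambda y: -abs(y - 2.0)  # crude "realness" score for targets
print(combined_loss(G, F, critic_score, paired=[(1.0, 2.0)], unpaired_x=[0.5, 1.5]))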


Reasonableness Monitors

AAAI Conferences

As we move towards autonomous machines responsible for making decisions previously entrusted to humans, there is an immediate need for machines to be able to explain their behavior and defend the reasonableness of their actions. To implement this vision, each part of a machine should be aware of the behavior of the other parts with which it cooperates. Each part must be able to explain the observed behavior of those neighbors in the context of the shared goal for the local community. If such an explanation cannot be made, it is evidence that either a part has failed (or was subverted) or the communication has failed. The development of reasonableness monitors is work towards generalizing that vision, with the intention of developing a system-construction methodology that enhances both robustness and security at runtime (not at static compile time) by dynamically checking and explaining the behaviors of parts and subsystems for reasonableness in context.


Chance-Constrained Probabilistic Simple Temporal Problems

AAAI Conferences

Scheduling under uncertainty is essential to many autonomous systems and logistics tasks. Probabilistic methods for solving temporal problems exist which quantify and attempt to minimize the probability of schedule failure. These methods are overly conservative, resulting in a loss in schedule utility. Chance-constrained formalisms address over-conservatism by imposing bounds on risk, while maximizing utility subject to these risk bounds. In this paper we present the probabilistic Simple Temporal Network (pSTN), a probabilistic formalism for representing temporal problems with bounded risk and a utility over event timing. We introduce a constrained optimisation algorithm for pSTNs that achieves compactness and efficiency through a problem encoding in terms of a parameterised STNU and its reformulation as a parameterised STN. We demonstrate through a car-sharing application that our chance-constrained approach runs in the same time as the previous probabilistic approach and yields solutions with utility improvements of at least 5% over prior art, while guaranteeing operation within the specified risk bound.
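For a single uncontrollable duration, the chance constraint at the heart of a pSTN can be checked directly from the duration's distribution; the sketch below does this for a car-sharing-style example with a Gaussian trip time and a return window. The numbers are illustrative assumptions, not taken from the paper.

from statistics import NormalDist

trip = NormalDist(mu=45.0, sigma=8.0)   # uncertain trip duration (minutes)
depart = 10.0                           # scheduled (controllable) departure time
window = (40.0, 70.0)                   # the car must be returned between t=40 and t=70

# Probability that depart + trip lands inside the return window.
lo, hi = window
p_success = trip.cdf(hi - depart) - trip.cdf(lo - depart)
risk = 1.0 - p_success
print(f"risk of violating the return window: {risk:.3f}")   # accept if within the bound, e.g. 0.05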