This paper reports on the findings of an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers which experience abrupt, partial or full failure of component devices. The problem we address is: given a hybrid model of system behavior, a history of executed controller actions, and a history of observations, including an observation of behavior that is aberrant relative to the model of expected behavior, determine what fault occurred to have caused the aberrant behavior. Determining a diagnosis can be cast as a search problem to find the most likely model for the data. Unfortunately, the search space is extremely large. To reduce search space size and to identify an initial set of candidate diagnoses, we propose to exploit techniques originally applied to qualitative diagnosis of continuous systems. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
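The diagnosis-as-search idea above can be illustrated with a minimal sketch: score a set of candidate fault models by how well each one reproduces the observation history, and rank them. The model names, the single-input/single-output signature, and the least-squares scoring are illustrative assumptions, not the paper's actual method (which uses qualitative diagnosis to seed candidates and parameter estimation to refine them).

```python
def rank_diagnoses(observations, controls, candidate_models):
    """Rank candidate fault models by how well each reproduces the
    observation history; a lower squared residual means a more likely
    diagnosis under this simplistic model-fitting stand-in."""
    scored = []
    for name, model in candidate_models.items():
        residual = sum((obs - model(u)) ** 2
                       for obs, u in zip(observations, controls))
        scored.append((residual, name))
    # most likely (smallest-residual) candidate first
    return [name for _, name in sorted(scored)]
```

For example, if a thruster command produces no observed acceleration, a hypothetical "thruster stuck off" model fits the data better than the nominal model and is ranked first.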
Suppose that a graph is realized from a stochastic block model in which one of the blocks is of interest, but many or all of the vertices' block labels are unobserved. The task is to order the vertices with unobserved block labels into a ``nomination list'' such that, with high probability, vertices from the interesting block are concentrated near the beginning of the list. We propose several vertex nomination schemes. Our basic, but principled, setting and development yields a best nomination scheme (a Bayes-optimal analogue), as well as a likelihood-maximization nomination scheme that is practical to implement when there are a thousand vertices, and that is empirically near-optimal when the number of vertices is small enough to allow comparison to the best nomination scheme. We then illustrate the robustness of the likelihood-maximization nomination scheme to the modeling challenges inherent in real data, using examples that include a social network involving human trafficking, the Enron graph, a worm brain connectome, and a political blog network.
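A minimal sketch of likelihood-based vertex nomination, under strong simplifying assumptions: the within-block and between-block edge probabilities (`p_in`, `p_out`) are known, and a seed set of vertices known to belong to the interesting block is available. Each unlabeled vertex is scored by the log-likelihood ratio of its edges to the seeds, and the nomination list ranks the highest-scoring vertices first. This is an illustration of the idea, not the paper's scheme (which maximizes the full graph likelihood over block assignments).

```python
import math

def nominate(adj, seeds, p_in=0.5, p_out=0.1):
    """Order unlabeled vertices into a nomination list by the
    log-likelihood ratio of their edges to known seed vertices."""
    lr_edge = math.log(p_in / p_out)              # evidence from a present edge
    lr_gap = math.log((1 - p_in) / (1 - p_out))   # evidence from an absent edge
    scores = {}
    for v in range(len(adj)):
        if v in seeds:
            continue
        scores[v] = sum(lr_edge if adj[v][s] else lr_gap for s in seeds)
    # nomination list: most likely interesting-block members first
    return sorted(scores, key=scores.get, reverse=True)
```

On a toy graph, a vertex connected to both seeds outranks one connected to a single seed, which in turn outranks an isolated vertex.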
The development of tools to provide insight into the behavioral response of a civilian population will greatly benefit the modeling and simulation community and has potential applications across multiple user communities in the U.S. Department of Defense. We present an overview of a modular agent-based modeling framework, grounded in human behavioral and social theory, that is intended to represent a population's stance on issues as a function of its changing beliefs, values, and interests. We utilize and integrate theories of narrative identity and planned behavior with macrosociological theories of heterogeneity and influence to model civilian behavior in a conflict ecosystem. Communication between agents takes place across a social network developed using real data about the population under consideration, and essential services are implemented as objects within the model, allowing for experimentation with different courses of action for development of civil service capacity. We describe the theoretical underpinnings of the model, the current state of implementation, potential use cases, and the path forward for future work.
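The agent-to-agent influence mechanism described above can be sketched very simply: at each communication step, an agent's stance on an issue shifts toward the mean stance of its social-network neighbors. The stance scale, the blending weight `w_self`, and the update rule itself are illustrative assumptions rather than the framework's actual implementation.

```python
def update_stance(own_stance, neighbor_stances, w_self=0.7):
    """One communication step: an agent's stance on an issue (in [-1, 1])
    is blended with the mean stance of its social-network neighbors,
    weighted by how strongly the agent holds its own view."""
    if not neighbor_stances:
        return own_stance  # no neighbors, no social influence
    social = sum(neighbor_stances) / len(neighbor_stances)
    return w_self * own_stance + (1 - w_self) * social
```

In a full model this update would be one component among the belief, value, and interest dynamics drawn from the integrated theories.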
A key challenge of automated planning, including "safe planning," is the requirement that a domain expert provide the background knowledge, including some set of safety constraints. Because acquiring complete and correct knowledge from human experts is infeasible in many complex, real-world domains, this paper investigates a technique for automated extraction of safety constraints by observing a user demonstration trace. In particular, we describe a new framework based on maximum likelihood learning for generating constraints on the concepts and properties in a domain ontology for a planning domain. Then, we describe a generalization of this framework that involves Bayesian learning of such constraints. To illustrate the advantages of our framework, we provide and discuss examples on a real test application for Airspace Control Order (ACO) planning, a benchmark application in the DARPA Integrated Learning Program.
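The Bayesian-learning generalization mentioned above can be sketched as follows: for a candidate constraint, count how often the states in the demonstration traces satisfy it and update a Beta-Bernoulli posterior accordingly. The function name, the state representation, and the Beta prior are illustrative assumptions, a simplistic stand-in for the paper's framework over domain-ontology concepts and properties.

```python
def constraint_posterior(traces, constraint, alpha=1.0, beta=1.0):
    """Posterior mean probability that `constraint` holds, under a
    Beta(alpha, beta) prior, given how often states in the observed
    demonstration traces satisfy it."""
    satisfied = sum(constraint(state) for trace in traces for state in trace)
    total = sum(len(trace) for trace in traces)
    return (alpha + satisfied) / (alpha + beta + total)
```

A constraint that every demonstrated state satisfies acquires a posterior probability close to one, while the prior keeps it below certainty on limited data.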
RUM (Reasoning with Uncertainty Module) is an integrated software tool based on KEE, a frame system implemented in an object-oriented language. RUM's architecture is composed of three layers: representation, inference, and control. The representation layer is based on frame-like data structures that capture the uncertainty information used in the inference layer and the uncertainty meta-information used in the control layer. The inference layer provides a selection of five T-norm-based uncertainty calculi with which to perform the intersection, detachment, union, and pooling of information. The control layer uses the meta-information to select the appropriate calculus for each context and to resolve eventual ignorance or conflict in the information. This layer also provides a context mechanism that allows the system to focus on the relevant portion of the knowledge base, and an uncertain-belief revision system that incrementally updates the certainty values of well-formed formulae (wffs) in an acyclic directed deduction graph. RUM has been tested and validated in a sequence of experiments in both naval and aerial situation assessment (SA), consisting of correlating reports and tracks, locating and classifying platforms, and identifying intents and threats. An example of naval situation assessment is illustrated. The testbed environment for developing these experiments has been provided by LOTTA, a symbolic simulator implemented in Flavors. This simulator maintains time-varying situations in a multi-player antagonistic game where players must make decisions in light of uncertain and incomplete data. RUM has been used to assist one of the LOTTA players to perform the SA task.
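The idea of selecting among T-norm-based calculi can be sketched with three standard T-norms and their De Morgan dual T-conorms. This shows the general mechanism of intersection (conjunction) and union (disjunction) of certainty values; the calculus names and the choice of these three particular T-norms are illustrative, not RUM's actual five calculi.

```python
def t_norm(a, b, calculus="min"):
    """Intersect two certainty values in [0, 1] under a chosen T-norm."""
    if calculus == "min":          # Zadeh: T(a, b) = min(a, b)
        return min(a, b)
    if calculus == "product":      # probabilistic: T(a, b) = a * b
        return a * b
    if calculus == "lukasiewicz":  # bounded difference: max(0, a + b - 1)
        return max(0.0, a + b - 1.0)
    raise ValueError(f"unknown calculus: {calculus}")

def t_conorm(a, b, calculus="min"):
    """Union of two certainty values via the De Morgan dual:
    S(a, b) = 1 - T(1 - a, 1 - b)."""
    return 1.0 - t_norm(1.0 - a, 1.0 - b, calculus)
```

A control layer in this style would pick the `calculus` argument per context, for example using the stricter Łukasiewicz T-norm where evidence sources are strongly correlated.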