The process of diagnosis involves learning about the state of a system from various observations of symptoms or findings about the system. Sophisticated Bayesian (and other) algorithms have been developed to revise and maintain beliefs about the system as observations are made. Nonetheless, diagnostic models have tended to ignore some common-sense reasoning exploited by human diagnosticians; in particular, one can learn from which observations have not been made, in the spirit of conversational implicature. We describe two concepts for extracting information from the observations not made. First, some symptoms, if present, are more likely to be reported before others. Second, most human diagnosticians and expert systems are economical in their data-gathering, searching first where they are more likely to find symptoms present. Thus, there is a desirable bias toward reporting symptoms that are present. We develop a simple model for these concepts that can significantly improve diagnostic inference.
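The idea that an unreported symptom carries (weak) evidence of absence can be sketched as a Bayesian update. The following is a minimal illustration, not the paper's actual model: the function name and the single reporting-bias parameter `p_report` (the probability that a present symptom would have been reported) are assumptions made for the example.

```python
def update_on_no_report(prior_d, p_s_d, p_s_nd, p_report):
    """Posterior P(D) given that symptom S was NOT reported.

    prior_d  : prior probability of disease D
    p_s_d    : P(S present | D), p_s_nd : P(S present | not D)
    p_report : probability a present symptom would have been reported;
               p_report = 0 recovers the usual practice of treating an
               unreported symptom as uninformative.
    """
    # Likelihood of "no report": either S is absent, or S is present
    # but was not reported (with probability 1 - p_report).
    like_d = (1 - p_s_d) + p_s_d * (1 - p_report)
    like_nd = (1 - p_s_nd) + p_s_nd * (1 - p_report)
    num = prior_d * like_d
    return num / (num + (1 - prior_d) * like_nd)
```

With a strong reporting bias, silence about a symptom that is likely under D pulls the posterior down; with `p_report = 0` the posterior equals the prior.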
RUM (Reasoning with Uncertainty Module) is an integrated software tool built on KEE, a frame system implemented in an object-oriented language. RUM's architecture is composed of three layers: representation, inference, and control. The representation layer is based on frame-like data structures that capture the uncertainty information used in the inference layer and the uncertainty meta-information used in the control layer. The inference layer provides a selection of five T-norm based uncertainty calculi with which to perform the intersection, detachment, union, and pooling of information. The control layer uses the meta-information to select the appropriate calculus for each context and to resolve any ignorance or conflict in the information. This layer also provides a context mechanism that allows the system to focus on the relevant portion of the knowledge base, and an uncertain-belief revision system that incrementally updates the certainty values of well-formed formulae (wffs) in an acyclic directed deduction graph. RUM has been tested and validated in a sequence of experiments in both naval and aerial situation assessment (SA), consisting of correlating reports and tracks, locating and classifying platforms, and identifying intents and threats. An example of naval situation assessment is illustrated. The testbed environment for developing these experiments has been provided by LOTTA, a symbolic simulator implemented in Flavors. This simulator maintains time-varying situations in a multi-player antagonistic game where players must make decisions in light of uncertain and incomplete data. RUM has been used to assist one of the LOTTA players to perform the SA task.
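A T-norm based calculus combines certainty values with a T-norm for conjunction/detachment and its dual T-conorm for union. The sketch below uses three standard T-norms (min, product, Łukasiewicz) purely as illustration; it does not reproduce RUM's actual five calculi or its API.

```python
# Standard T-norms (conjunction operators on certainties in [0, 1]).
def t_min(a, b):         return min(a, b)               # Zadeh
def t_product(a, b):     return a * b                   # probabilistic
def t_lukasiewicz(a, b): return max(0.0, a + b - 1.0)   # Lukasiewicz

def detach(premise_cert, rule_cert, t_norm):
    """Detachment (modus ponens): certainty of a conclusion from the
    certainty of the premise and of the rule, under a chosen T-norm."""
    return t_norm(premise_cert, rule_cert)

def union(a, b, t_norm):
    """Dual T-conorm via De Morgan: S(a, b) = 1 - T(1 - a, 1 - b)."""
    return 1.0 - t_norm(1.0 - a, 1.0 - b)
```

Selecting a T-norm per context, as RUM's control layer does, amounts to passing a different `t_norm` into the same inference operations.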
The development of tools to provide insight into the behavioral response of a civilian population will greatly benefit the modeling and simulation community and have potential applications across multiple user communities in the U.S. Department of Defense. We present an overview of a modular agent-based modeling framework, grounded in human behavioral and social theory, which is intended to represent a population's stance on issues as a function of its changing beliefs, values, and interests. We utilize and integrate theories of narrative identity and planned behavior with macrosociological theories of heterogeneity and influence to model civilian behavior in a conflict ecosystem. Communication between agents takes place across a social network developed using real data about the population under consideration, and essential services are implemented as objects within the model, allowing for experimentation with different courses of action for development of civil service capacity. We describe the theoretical underpinnings of the model, the current state of implementation, potential use cases, and the path forward for future work.
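The core mechanic of stance change through network influence can be illustrated with a minimal agent sketch. This is a generic social-influence toy model constructed for illustration, under assumed names and a single assumed influence rate; it is not the framework described above.

```python
class Agent:
    """An agent holding a stance on one issue, in [0, 1]."""
    def __init__(self, stance):
        self.stance = stance
        self.neighbors = []  # social-network ties to other agents

def influence_step(agents, rate=0.1):
    """One round of social influence: each agent moves its stance toward
    the mean stance of its network neighbors, synchronously."""
    updated = {}
    for a in agents:
        if a.neighbors:
            mean = sum(n.stance for n in a.neighbors) / len(a.neighbors)
            updated[a] = a.stance + rate * (mean - a.stance)
        else:
            updated[a] = a.stance
    for a, s in updated.items():
        a.stance = s
```

In the full framework the network would come from real population data and stances would also respond to essential-service objects; here only the influence channel is shown.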
A key challenge of automated planning, including "safe planning," is the requirement of a domain expert to provide the background knowledge, including some set of safety constraints. Because acquiring complete and correct knowledge from human experts is infeasible in many complex, real-world domains, this paper investigates a technique for automated extraction of safety constraints by observing a user demonstration trace. In particular, we describe a new framework based on maximum likelihood learning for generating constraints on the concepts and properties in a domain ontology for a planning domain. Then, we describe a generalization of this framework that involves Bayesian learning of such constraints. To illustrate the advantages of our framework, we provide and discuss examples on a real test application for Airspace Control Order (ACO) planning, a benchmark application in the DARPA Integrated Learning Program.
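The flavor of maximum-likelihood constraint learning from demonstrations can be shown with a toy case: learning a bound constraint on a numeric property from the values a user's demonstration trace actually used. The function below is an illustrative assumption, not the paper's framework.

```python
def ml_bound_constraint(demo_values):
    """Maximum-likelihood bound constraint from demonstrated values.

    Assuming demonstrated values are drawn uniformly from an unknown
    safe interval [lo, hi], the likelihood (1/(hi-lo))^n is maximized
    by the tightest interval covering all observations.
    """
    return (min(demo_values), max(demo_values))
```

A Bayesian generalization, as the abstract suggests, would instead place a prior over `(lo, hi)` and return a posterior that allows some safe values outside the demonstrated range.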