Qualitative Reasoning about Cyber Intrusions
Robertson, Paul (DOLL Inc.) | Laddaga, Robert (Vanderbilt University) | Goldman, Robert (SIFT) | Burstein, Mark (SIFT) | Cerys, Daniel (DOLL Inc.)
In this paper we discuss work performed in an ambitious DARPA-funded cyber security effort. The broad approach taken by the project was for the network to be self-aware and to self-adapt in order to dodge attacks. In critical systems, it is not always practical, or even desirable, to shut down a network under attack. The paper describes the qualitative trust modeling and diagnosis system that maintains a model of trust for networked resources using a combination of two basic ideas: conditional trust, based on conditional preference networks (CP-nets), and the principle of maximum entropy (PME). We describe Monte Carlo simulations of adaptive security based on our trust model. The results of the simulations show the trade-off, under ideal conditions, between additional resource provisioning and attack mitigation.
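The trade-off the abstract mentions can be illustrated with a toy simulation. The sketch below is a hypothetical illustration, not the paper's actual model: resource trust scores, attack probabilities, the halving-based trust update, and the threshold policy are all invented for exposition. An adaptive scheduler avoids low-trust resources (mitigating attacks) at the cost of provisioning spares.

```python
import random

def simulate(n_resources=20, n_steps=200, trust_threshold=0.5,
             attack_prob=0.05, seed=0):
    """Toy Monte Carlo sketch (assumed parameters, not the paper's model).

    Each resource carries a trust score in [0, 1]; noisy attack
    indicators lower trust, and an adaptive policy stops scheduling
    work on resources whose trust falls below trust_threshold,
    provisioning a spare instead.
    """
    rng = random.Random(seed)
    trust = [1.0] * n_resources
    compromised = [False] * n_resources
    hits = 0         # work units placed on a compromised resource
    spares_used = 0  # extra provisioning incurred by avoidance
    for _ in range(n_steps):
        # Attacker occasionally compromises a random resource.
        if rng.random() < attack_prob:
            compromised[rng.randrange(n_resources)] = True
        # Noisy sensing: compromised resources tend to lose trust.
        for i in range(n_resources):
            if compromised[i] and rng.random() < 0.5:
                trust[i] *= 0.5
        # Place one unit of work on the most trusted resource.
        i = max(range(n_resources), key=lambda j: trust[j])
        if trust[i] < trust_threshold:
            spares_used += 1  # dodge the attack: provision a fresh spare
        elif compromised[i]:
            hits += 1         # work landed on a compromised resource
    return hits, spares_used
```

Sweeping `trust_threshold` traces the trade-off: a low threshold uses few spares but accepts more work on compromised resources, while a high threshold mitigates attacks at greater provisioning cost.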
Qualitative Reasoning: Everyday, Pervasive, and Moving Forward — A Report on QR-15
Friedman, Scott (SIFT) | Lockwood, Ann Kate (University of St. Thomas)
When human experts build qualitative or quantitative models of complex systems, they use the function of the system as a guideline to decide what to model and how to model it, yet they do not often encode this functional knowledge directly. If qualitative and quantitative models contained this functional knowledge, our reasoning systems might use it as a heuristic or as a filter during the course of quantitative and qualitative simulation. Matthew Klenk (PARC) delivered a separate talk related to massive-scale model-based reasoning, describing the challenge of choosing initial conditions for simulation. Throughout the technical presentations on advances in qualitative simulation, we discussed the practicality of automatically transforming quantitative and qualitative models during the course of reasoning.
Computational Mechanisms to Support Reporting of Self Confidence of Automated/Autonomous Systems
Kuter, Ugur (SIFT) | Miller, Chris (SIFT)
This paper describes a new candidate method for computing autonomous "self confidence." We describe how to analyze a plan for possible but unexpected breakdown cases and how to adapt the plan to circumvent those conditions. We view the resulting plan as more stable than the original one. The ability to achieve such plan stability is the core of how we propose to compute a system's self confidence in its decisions and plans. This paper summarizes the approach and presents a preliminary evaluation suggesting that it is promising.
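One way to make the plan-stability notion concrete is to estimate confidence as the fraction of sampled unexpected conditions under which a plan still succeeds. The sketch below is a hypothetical illustration under that assumption, not the authors' algorithm; the `plan_succeeds` callback and the sampling scheme are invented for exposition.

```python
import random

def self_confidence(plan_succeeds, n_trials=1000, seed=0):
    """Hedged sketch: estimate a plan's self-confidence as its
    stability under perturbation, i.e. the fraction of randomly
    sampled unexpected conditions the plan survives.

    plan_succeeds(rng) -> bool checks one perturbed execution,
    drawing its disturbances from the supplied random generator.
    """
    rng = random.Random(seed)
    successes = sum(plan_succeeds(rng) for _ in range(n_trials))
    return successes / n_trials

# Illustrative use: a brittle plan tolerates only mild disturbances,
# while an adapted plan tolerates most of them, so the adapted plan
# earns a higher confidence estimate.
brittle_plan = lambda rng: rng.random() < 0.2
adapted_plan = lambda rng: rng.random() < 0.8
```

Under this reading, adapting a plan to circumvent breakdown cases raises the measured stability, and with it the reported self-confidence.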