Smart Information Flow Technologies, LLC
Active Perception for Cyber Intrusion Detection and Defense
Benton, J. (Smart Information Flow Technologies, LLC) | Goldman, Robert P. (Smart Information Flow Technologies, LLC) | Burstein, Mark (Smart Information Flow Technologies, LLC) | Mueller, Joseph (Smart Information Flow Technologies, LLC) | Robertson, Paul (DOLL Labs) | Cerys, Dan (DOLL Labs) | Hoffman, Andreas (DOLL Labs) | Bobrow, Rusty (Bobrow Computational Intelligence, LLC)
Most modern network-based intrusion detection systems (IDSs) passively monitor network traffic to identify possible attacks through known vectors. Though useful, this approach is widely known to suffer from high false positive rates, often causing administrators to experience a "cry wolf" effect, where they ignore all warnings because so many have been false. In this paper, we focus on a method to reduce this effect using active perception, an idea borrowed from computer vision and neuroscience. Our approach is informed by theoretical ideas from decision theory and recent research results in neuroscience. The active perception agent allocates computational and sensing resources to (approximately) optimize its Value of Information. To do this, it draws on models to direct sensors toward the phenomena of greatest interest for informing decisions about cyber defense actions. A model of the organization's mission, which identifies critical network assets, provides the measure of self-interest (and thus of value of information). This model enables the system to follow leads from inexpensive, inaccurate alerts with targeted use of expensive, accurate sensors. The deployed sensors are used to build structured interpretations of the situation, from which an organization can meet mission-centered decision-making requirements with calibrated responses proportional to the likelihood of true detection and the degree of threat.
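To make the value-of-information idea in this abstract concrete, the following Python sketch decides whether an expensive, accurate sensor is worth deploying on a host after a cheap, noisy alert fires. It is a minimal illustration under assumed numbers, not the paper's agent or mission models; all probabilities, costs, and function names (posterior, value_of_information, etc.) are hypothetical.

    # Minimal value-of-information sketch (hypothetical numbers and names, not
    # the paper's actual agent): decide whether an expensive, accurate sensor
    # is worth deploying on a host after a cheap, noisy alert fires.

    def posterior(prior, tpr, fpr, fired):
        """Bayesian update of P(compromised) given a binary sensor reading."""
        if fired:
            num = tpr * prior
            den = tpr * prior + fpr * (1.0 - prior)
        else:
            num = (1.0 - tpr) * prior
            den = (1.0 - tpr) * prior + (1.0 - fpr) * (1.0 - prior)
        return num / den

    def expected_loss(p_comp, asset_loss, response_cost):
        """Best achievable expected loss: respond (fixed cost) or ignore (risk)."""
        return min(response_cost, p_comp * asset_loss)

    def value_of_information(prior, tpr, fpr, asset_loss, response_cost):
        """Expected loss reduction from observing the expensive sensor first."""
        p_fire = tpr * prior + fpr * (1.0 - prior)   # marginal P(sensor fires)
        loss_now = expected_loss(prior, asset_loss, response_cost)
        loss_after = (
            p_fire * expected_loss(posterior(prior, tpr, fpr, True),
                                   asset_loss, response_cost)
            + (1.0 - p_fire) * expected_loss(posterior(prior, tpr, fpr, False),
                                             asset_loss, response_cost))
        return loss_now - loss_after

    if __name__ == "__main__":
        # A cheap alert raised P(compromised) to 5% on a mission-critical asset.
        voi = value_of_information(prior=0.05, tpr=0.95, fpr=0.01,
                                   asset_loss=2000.0, response_cost=50.0)
        print(f"VoI = {voi:.1f}; deploy expensive sensor: {voi > 5.0}")

In this toy setting the accurate sensor is worth its cost because its reading would usually let the defender either respond confidently or safely ignore the alert, which mirrors the calibrated, mission-weighted responses the abstract describes.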
Improving Trust Estimates in Planning Domains with Rare Failure Events
Potts, Colin M. (Lawrence University) | Krebsbach, Kurt (Lawrence University) | Thayer, Jordan (Smart Information Flow Technologies, LLC) | Musliner, Dave (Smart Information Flow Technologies, LLC)
In many planning domains, it is impossible to construct plans that are guaranteed to keep the system completely safe. A common approach is to build probabilistic plans that are guaranteed to maintain system safety with a sufficiently high probability. For many such domains, bounds on system safety cannot be computed analytically, but instead rely on execution sampling coupled with plan verification techniques. While probabilistic planning with verification can work well, it is not adequate in situations in which some modes of failure are very rare, simply because too many execution traces must be sampled (e.g., 10^12) to ensure that the rare events of interest will occur even once. The P-CIRCA planner seeks to solve planning problems while probabilistically guaranteeing safety. Our domains frequently involve verifying that the probability of failure is below a low threshold (< 0.01). Because the events we sample have such low probabilities, we use importance sampling (IS) (Hammersley and Handscomb 1964; Clarke and Zuliani 2011) to reduce the number of samples required. However, since we deal with an abstracted model, we cannot bias all paths individually, which prevents IS from achieving a correct bias. To compensate for this drawback, we present the concept of DAGification, which partially expands our representation to achieve a better bias.
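To illustrate why importance sampling helps with such rare failure events, the Python sketch below estimates the probability that a 100-step execution trace contains at least one failure when the per-step failure probability is 10^-6. The trace model, proposal probability, and function names are illustrative assumptions, not the P-CIRCA sampler or its DAGification scheme.

    import random

    P_FAIL = 1e-6      # nominal per-step failure probability (rare event)
    Q_FAIL = 1e-2      # biased proposal: draw failures far more often
    N_STEPS = 100      # length of each sampled execution trace

    def sample_trace_biased(rng):
        """Sample one trace under the biased proposal.

        Returns (failed, weight), where weight is the likelihood ratio of the
        trace under the nominal model versus the biased proposal.
        """
        weight, failed = 1.0, False
        for _ in range(N_STEPS):
            if rng.random() < Q_FAIL:                     # biased step fails
                weight *= P_FAIL / Q_FAIL
                failed = True
            else:                                         # biased step succeeds
                weight *= (1.0 - P_FAIL) / (1.0 - Q_FAIL)
        return failed, weight

    def estimate_failure_probability(num_samples=20000, seed=0):
        """Unbiased IS estimate of P(at least one failure in a trace)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(num_samples):
            failed, weight = sample_trace_biased(rng)
            if failed:
                total += weight
        return total / num_samples

    if __name__ == "__main__":
        exact = 1.0 - (1.0 - P_FAIL) ** N_STEPS           # about 1e-4
        print(f"IS estimate: {estimate_failure_probability():.2e}  "
              f"exact: {exact:.2e}")

Plain Monte Carlo would need on the order of a million or more traces just to observe a single failure, whereas the reweighted biased samples concentrate on the rare event while keeping the estimator unbiased. Note that this sketch applies one uniform bias to every step; the abstract's point is that in an abstracted model individual paths cannot be biased separately, which is what DAGification is meant to mitigate.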