Laddaga, Robert


Reports of the Workshops of the Thirty-First AAAI Conference on Artificial Intelligence

AI Magazine



Qualitative Reasoning about Cyber Intrusions

AAAI Conferences

In this paper we discuss work performed in an ambitious DARPA-funded cyber security effort. The broad approach taken by the project was for the network to be self-aware and to self-adapt in order to dodge attacks. In critical systems, shutting down the network under attack is not always the best or most practical option. The paper describes the qualitative trust modeling and diagnosis system, which maintains a model of trust for networked resources using a combination of two basic ideas: conditional trust, based on conditional preference networks (CP-nets), and the principle of maximum entropy (PME). We describe Monte Carlo simulations of adaptive security based on our trust model. The results of the simulations show the trade-off, under ideal conditions, between additional resource provisioning and attack mitigation.


AWDRAT: A Cognitive Middleware System for Information Survivability

AI Magazine

The infrastructure of modern society is controlled by software systems that are vulnerable to attacks. Many such attacks, launched by "recreational hackers," have already led to severe disruptions and significant cost. It is therefore critical that we find ways to protect such systems and to enable them to continue functioning even after a successful attack. This article describes AWDRAT, a prototype middleware system for providing survivability to both new and legacy applications. AWDRAT stands for architectural differencing, wrappers, diagnosis, recovery, adaptive software, and trust modeling. AWDRAT uses these techniques to gain visibility into the execution of an application system and to compare the application's actual behavior to that which is expected. In the case of a deviation, AWDRAT conducts a diagnosis that determines which computational resources are likely to have been compromised and then adds these assessments to its trust model. The trust model in turn guides the recovery process, particularly by guiding the system in its choice among functionally equivalent methods and resources. AWDRAT has been applied to and evaluated on an example application system, a graphical editor for constructing mission plans. We describe a series of experiments that were performed to test the effectiveness of AWDRAT in recognizing and recovering from simulated attacks, and we present data showing the effectiveness of AWDRAT in detecting a variety of compromises to the application system (approximately 90 percent of all simulated attacks are detected, diagnosed, and corrected). We also summarize some lessons learned from the AWDRAT experiments and suggest approaches for comprehensive application protection methods and techniques.