Collaborating Authors

Artificial intelligence (AI) and cognitive computing: what, why and where


Although artificial intelligence (as a set of technologies, not in the sense of mimicking human intelligence) has been around for a long time in many forms, it is a term that quite a few people, certainly IT vendors, prefer not to use much anymore – yet artificial intelligence is real, for your business too. Instead of talking about artificial intelligence (AI), many describe the current wave of AI innovation and acceleration with – admittedly somewhat differently positioned – terms and concepts such as cognitive computing, or they focus on real-life applications of artificial intelligence that often carry labels such as "smart" (omnipresent in anything related to the IoT as well), "intelligent", "predictive" and, indeed, "cognitive", depending on the exact application – and vendor.

Despite these terminology issues, artificial intelligence is essential for and in, among others, information management, healthcare, life sciences, data analysis, digital transformation, security (cybersecurity and others), various consumer applications, next-gen smart building technologies, FinTech, predictive maintenance, robotics and much more. On top of that, AI is being added to several other technologies, including the IoT and both big and small data analytics. There are many reasons why vendors hesitate to use the term artificial intelligence for their AI solutions and innovations, and often package them under another term (trust us, we've been there). Artificial intelligence (AI) carries a somewhat negative connotation in general perception, and in the perception of technology leaders and firms as well.

The artificial reality of cyber defence


Attacks are getting more complex. This is especially true when it comes to cyberwar, so much so that government-sponsored attacks have been bolstered by research investments that approach military proportions. Just look at the recent report published by the US State Department, which said that strategies for stopping cyber attacks need to be fundamentally reconsidered in light of complex cyber threats posed by rival states. Detecting and stopping these attacks requires innovation. I say that because anomaly detection based on traditional correlation rules often produces too many false positives and more events than can reasonably be reviewed manually.

An Anomaly Contribution Explainer for Cyber-Security Applications Machine Learning

In this paper we introduce the Anomaly Contribution Explainer, or ACE, a tool to explain security anomaly detection models in terms of the model features through a regression framework, and its variant, ACE-KL, which highlights the important anomaly contributors. ACE and ACE-KL provide insights for diagnosing which attributes significantly contribute to an anomaly by building a specialized linear model to locally approximate the anomaly score that a black-box model generates. We conducted experiments with these anomaly detection models to detect security anomalies on both synthetic data and real data. In particular, we evaluate performance on three public data sets: CERT insider threat, netflow logs, and Android malware. The experimental results are encouraging: our methods consistently identify the correct contributing feature in the synthetic data, where ground truth is available; similarly, for real data sets, our methods point a security analyst in the direction of the underlying causes of an anomaly, in one case leading to the discovery of previously overlooked network scanning activity. We have made our source code publicly available.

Cyber-security is a key concern for both private and public organizations, given the high cost of security compromises and attacks; malicious cyber-activity cost the U.S. economy between $57 billion and $109 billion in 2016 [1]. As a result, spending on security research and development, and on security products and services to detect and combat cyber-attacks, has been increasing [2]. Organizations produce large amounts of network, host and application data that can be used to gain insights into cyber-security threats, misconfigurations, and network operations. While security domain experts can manually sift through some amount of data to spot attacks and understand them, it is virtually impossible to do so at scale, considering that even a medium-sized enterprise can produce terabytes of data in a few hours.
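The core idea of locally approximating a black-box anomaly score with a linear model can be sketched as follows. This is a minimal illustration of the technique, not the paper's implementation: the toy scoring function, perturbation scale, and feature names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def anomaly_score(X):
    # Stand-in for a black-box anomaly detector; here feature 2
    # secretly drives the score.
    return 3.0 * X[:, 2] + 0.1 * X[:, 0]

x0 = np.array([0.5, 0.2, 4.0])  # the flagged (anomalous) point

# Sample perturbations around x0 and query the black box.
X = x0 + rng.normal(scale=0.3, size=(500, 3))
y = anomaly_score(X)

# Fit a local linear surrogate (with intercept) via least squares.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w = coef[:3]

# Each feature's contribution ~ local coefficient * feature value.
contrib = w * x0
top = int(np.argmax(np.abs(contrib)))
print("per-feature contributions:", contrib)
print("top contributor: feature", top)
```

Because the surrogate is fit only on points near the flagged sample, its coefficients describe the model's behaviour around that one anomaly, which is what lets the analyst read off the dominant contributing attribute.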

Cyber Anomaly Detection Using Graph-node Role-dynamics Machine Learning

Intrusion detection systems (IDSs) generate valuable knowledge about network security, but an abundance of false alarms and a lack of methods to capture the interdependence among alerts hamper their utility for network defense. Here, we explore a graph-based approach for fusing alerts generated by multiple IDSs (e.g., Snort, OSSEC, and Bro). Our approach generates a weighted graph of alert fields (not network topology) that makes explicit the connections between multiple alerts, IDSs, and other cyber artifacts. We use this multi-modal graph to identify anomalous changes in the alert patterns of a network. To detect the anomalies, we apply the role-dynamics approach, which has successfully identified anomalies in social media, email, and IP communication graphs. In the cyber domain, each node (alert field) in the fused IDS alert graph is assigned a probability distribution across a small set of roles based on that node's features. A cyber attack should trigger IDS alerts and cause changes in the node features, but rather than track every feature for every alert-field node individually, roles provide a succinct, integrated summary of those feature changes. We measure changes in each node's probabilistic role assignment over time, and identify anomalies as deviations from expected roles. We test our approach using simulations including three weeks of normal background traffic, as well as cyber attacks that occur near the end of the simulations. This paper presents a novel approach to multi-modal data fusion and a novel application of role dynamics within the cyber-security domain. Our results show a drastic decrease in the false-positive rate when considering our anomaly indicator instead of the IDS alerts themselves, thereby reducing alarm fatigue and providing a promising avenue for threat intelligence in network defense.
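The role-dynamics idea described above can be sketched in miniature: factor a node-by-feature matrix into soft role memberships (here via a small hand-rolled non-negative matrix factorization, in the spirit of RolX-style role discovery), then flag nodes whose role distribution shifts between time windows. The toy data, the choice of NMF, and the L1 shift measure are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def role_distributions(V, k=2, iters=300):
    """Factor V ~ G @ F with multiplicative NMF updates;
    normalized rows of G are per-node role mixtures."""
    n, f = V.shape
    G = rng.random((n, k)) + 1e-3
    F = rng.random((k, f)) + 1e-3
    for _ in range(iters):
        G *= (V @ F.T) / (G @ F @ F.T + 1e-9)
        F *= (G.T @ V) / (G.T @ G @ F + 1e-9)
    return G / G.sum(axis=1, keepdims=True)

# Toy node features for two time windows; node 0 changes behaviour
# in window 2 (e.g., an alert field suddenly co-occurring differently).
V1 = np.array([[5., 0.], [5., 1.], [0., 5.], [1., 5.]])
V2 = np.array([[0., 5.], [5., 1.], [0., 5.], [1., 5.]])

# Factor both windows jointly so roles stay aligned across time.
R = role_distributions(np.vstack([V1, V2]))
R1, R2 = R[:4], R[4:]

shift = np.abs(R1 - R2).sum(axis=1)  # per-node role change over time
print("role shift per node:", shift)
print("most anomalous node:", int(np.argmax(shift)))
```

Tracking a node's position in a low-dimensional role space, rather than every raw feature, is what gives the approach its compact anomaly indicator: only nodes whose role mixture moves appreciably between windows are surfaced to the analyst.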