
Security Think Tank: AI cyber attacks will be a step-change for criminals

#artificialintelligence

Whether your organisation suffers a cyber attack has long been considered a case of 'when, not if', and such attacks can have a huge impact. In 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than $654bn. By 2019, this had risen to 4.1 billion exposed records. While the use of artificial intelligence (AI) and machine learning as a primary offensive tool in cyber attacks is not yet mainstream, its use and capabilities are growing and becoming more sophisticated. In time, cyber criminals will inevitably take advantage of AI, increasing threats to digital security and driving up both the volume and sophistication of cyber attacks.


Challenges in Forecasting Malicious Events from Incomplete Data

arXiv.org Machine Learning

The ability to accurately predict cyber-attacks would enable organizations to mitigate their growing threat and avert the financial losses and disruptions they cause. But how predictable are cyber-attacks? Researchers have attempted to combine external data -- ranging from vulnerability disclosures to discussions on Twitter and the darkweb -- with machine learning algorithms to learn indicators of impending cyber-attacks. However, successful cyber-attacks represent a tiny fraction of all attempted attacks: the vast majority are stopped, or filtered, by the security appliances deployed at the target. As we show in this paper, the process of filtering reduces the predictability of cyber-attacks. The small number of attacks that do penetrate the target's defenses follow a different generative process from the data as a whole, one that is much harder for predictive models to learn. This may be because the resulting time series depends on the filtering process in addition to all the factors that drove the original time series. We empirically quantify the loss of predictability due to filtering using real-world data from two organizations. Our work identifies the limits to forecasting cyber-attacks from highly filtered data.
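
The abstract's core point, that heavy filtering makes the surviving attack series harder to forecast, can be illustrated with a toy simulation. The sketch below is not the paper's data or models; it assumes a synthetic AR(1)-driven series of attack attempts, a fixed 2% "success" filter and a naive persistence forecast, purely to show how relative forecast error degrades after filtering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a daily count series of *attempted* attacks with strong
# autocorrelation (AR(1) on a latent rate), then "filter" it by letting
# each attempt through independently with small probability, mimicking
# security appliances that block the vast majority of attempts.
days = 1000
rate = np.empty(days)
rate[0] = 50.0
for t in range(1, days):
    rate[t] = 5.0 + 0.9 * rate[t - 1] + rng.normal(0, 5)
attempts = rng.poisson(np.clip(rate, 0, None))   # raw attack attempts
successes = rng.binomial(attempts, 0.02)          # ~2% slip through

def one_step_mae(series, train_frac=0.8):
    """Naive persistence forecast (predict yesterday's count), scaled MAE."""
    split = int(len(series) * train_frac)
    test = series[split:]
    pred = series[split - 1:-1]
    return np.mean(np.abs(test - pred)) / (np.mean(test) + 1e-9)

print("scaled MAE, raw attempts :", round(one_step_mae(attempts), 3))
print("scaled MAE, filtered     :", round(one_step_mae(successes), 3))
# The filtered (successful-attack) series is typically far harder to
# forecast relative to its mean, illustrating the predictability loss.
```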


Semisupervised Adversarial Neural Networks for Cyber Security Transfer Learning

arXiv.org Machine Learning

On the path to establishing a global cybersecurity framework where each enterprise shares information about malicious behavior, an important question arises. How can a machine learning representation characterizing a cyber attack on one network be used to detect similar attacks on other enterprise networks if each network has wildly different distributions of benign and malicious traffic? We address this issue by comparing the results of naively transferring a model across network domains, and of using CORrelation ALignment, with our novel adversarial Siamese neural network. Our proposed model learns attack representations that are more invariant to each network's particularities via an adversarial approach. It uses a simple ranking loss that prioritizes labeling the most egregious malicious events correctly over average accuracy. This is appropriate for driving an alert triage workflow in which an analyst only has time to inspect the top few events ranked highest by the model. In terms of accuracy, the other approaches fail completely to detect any malicious events in the first 100 ranked events when models trained on one dataset are evaluated on another. The method presented here, by contrast, retrieves sizable proportions of malicious events, at the expense of some training instability inherent in adversarial modeling. We evaluate these approaches using two publicly available networking datasets and suggest areas for future research.
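
The abstract does not spell out its "simple ranking loss", but a common way to prioritize the most egregious events over average accuracy is a pairwise margin (hinge) ranking loss over event scores. The PyTorch sketch below is an illustrative stand-in under that assumption, not the paper's actual loss or model; the function, feature dimensions, and variable names are invented for the example.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(scores, labels, margin=1.0):
    """Hinge-style ranking loss: every malicious event (label 1) should be
    scored at least `margin` above every benign event (label 0), so the
    worst offenders rise to the top of an analyst's triage queue."""
    pos = scores[labels == 1]            # scores of malicious events
    neg = scores[labels == 0]            # scores of benign events
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.sum() * 0.0        # keep the graph intact for empty batches
    diff = margin - (pos.unsqueeze(1) - neg.unsqueeze(0))
    return F.relu(diff).mean()

# Toy usage: a linear scorer over 10-dimensional event features.
scorer = torch.nn.Linear(10, 1)
x = torch.randn(32, 10)                  # a batch of 32 events
y = torch.randint(0, 2, (32,))           # 1 = malicious, 0 = benign
loss = pairwise_ranking_loss(scorer(x).squeeze(-1), y)
loss.backward()
print(float(loss))
```

Because only pairwise ordering matters, a loss of this shape pushes the highest-ranked alerts to be genuinely malicious, which is the triage behaviour the abstract describes.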


Use of Artificial Intelligence Techniques / Applications in Cyber Defense

arXiv.org Artificial Intelligence

Nowadays, considering the speed of the processes and the amount of data used in cyber defense, effective defense cannot be expected from human effort alone, without the help of automation systems. Moreover, it is difficult to develop software with conventional, fixed algorithms for effective defense against dynamically evolving attacks on networks. This can instead be achieved with artificial intelligence methods, which provide flexibility and learning capability. The likelihood of developing cyber defense capabilities through increased intelligence of defense systems is quite high. Given the problems associated with cyber defense in real life, it is clear that many cyber defense problems can be successfully solved only when artificial intelligence methods are used. In this article, current artificial intelligence practices and techniques are reviewed, and the use and importance of artificial intelligence in cyber defense systems are discussed. The aim of this article is to explain, with current examples, the use of these methods in cyber defense, by considering and analyzing the artificial intelligence technologies and methodologies currently being developed and their role and adaptation in the defense of cyberspace.


6 artificial intelligence cybersecurity tools you need to know Packt Hub

#artificialintelligence

Recently, many organizations suffered a severe setback due to an undetected malware, DeepLocker, which quietly evaded even stringent cyber security mechanisms. DeepLocker leverages an AI model to attack the target host, using indicators such as facial recognition, geolocation and voice recognition. This incident speaks volumes about the big role AI plays in the cybersecurity domain. In fact, some may even go so far as to say that AI for cybersecurity is no longer a nice-to-have technology but a necessity. Organizations large and small, and even startups, are investing heavily in building AI systems to analyze their huge data troves and, in turn, help their cybersecurity professionals identify possible threats and take precautions or immediate action to address them.


The artificial reality of cyber defence

#artificialintelligence

Attacks are getting more complex. This is especially true when it comes to cyberwar, so much so that government-sponsored attacks have been bolstered by research investments that approach military proportions. Just look at the recent report published by the US State Department, which said that strategies for stopping cyber attacks need to be fundamentally reconsidered in light of the complex cyber threats posed by rival states. In order to detect and stop these attacks, innovation is required. I say that because anomaly detection based on traditional correlation rules often results in too many false positives and more events than can reasonably be reviewed manually.
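
To make that contrast concrete: a traditional correlation rule is essentially a hard threshold on a hand-picked feature, whereas an anomaly detector learns what "normal" traffic looks like and flags deviations. The sketch below is a generic illustration using scikit-learn's IsolationForest on made-up session features; it is not drawn from the article, and the feature names and thresholds are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Made-up session features: [events per minute, megabytes transferred].
normal = rng.normal(loc=[20, 5], scale=[5, 2], size=(2000, 2))
odd = rng.normal(loc=[22, 60], scale=[5, 10], size=(20, 2))   # exfiltration-like
sessions = np.vstack([normal, odd])

# A "correlation rule": flag anything above a fixed event-rate threshold.
# It fires on dozens of ordinary sessions yet misses the high-volume outliers.
rule_alerts = sessions[:, 0] > 30

# Anomaly detection: learn the shape of normal traffic, flag the outliers.
detector = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
anomaly_alerts = detector.predict(sessions) == -1

print("rule alerts:", int(rule_alerts.sum()),
      "anomaly alerts:", int(anomaly_alerts.sum()))
```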


IDG Connect The future of machine learning in cybersecurity: What can CISOs expect?

#artificialintelligence

August saw the Defense Advanced Research Projects Agency (DARPA) host its first Cyber Grand Challenge – the first hacking competition not involving people. During this event, teams left their systems alone to single-handedly find, diagnose and fix software flaws in real time. Elsewhere, researchers at MIT are not only developing machine learning systems that automatically mine dark web marketplaces for vulnerabilities and zero-day attacks and report them back, as well as software that automatically fixes buggy code, but also a platform that can predict 85% of cyber-attacks. Machine learning, deep learning, and artificial intelligence (AI) are hot topics at the moment, and while there's plenty of research going on, there are also practical applications that can be deployed right now to make life easier for cybersecurity professionals. A glut of new start-ups, including Darktrace, Cylance, Deep Instinct, and HackerOne, plus established players such as FireEye, IBM, and Forcepoint, are all working on bringing self-learning systems into the world of security.