Leverage Machine Learning to Detect Insider Threats
For an insider threat program to benefit from machine learning, the algorithms must first be trained and deployed. To succeed, they must be trained on pre-collected, validated data sets, and collection, validation, and training all tend to be difficult and time-consuming. This is one of many areas where the benefits of a data mesh come into play: today, data collection happens continuously and at high volume across a vast number of sources, which must be governed and exposed to extract value.
Insider Detection using Deep Autoencoder and Variational Autoencoder Neural Networks
Efthimios Pantelidis, Gueltoum Bendiab, Stavros Shiaeles, Nicholas Kolokotronis
Insider attacks are one of the most challenging cybersecurity issues for companies, businesses and critical infrastructures. Despite implemented perimeter defences, the risk of this kind of attack remains very high. In fact, the detection of insider attacks is a very complicated security task and presents a serious challenge to the research community. In this paper, we aim to address this issue using the deep learning algorithms Autoencoder and Variational Autoencoder. We especially investigate the usefulness of applying these algorithms to automatically defend against potential internal threats, without human intervention. The effectiveness of the two models is evaluated on the public CERT Insider Threat Test dataset (CERT r4.2), a version that includes both benign and malicious activities generated by 1,000 simulated users. Comparison with other models shows that the Variational Autoencoder neural network provides the best overall performance, with greater detection accuracy and a reasonable false positive rate.
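The core mechanism in the paper above is reconstruction error: an autoencoder trained only on normal activity reconstructs familiar behaviour well and unfamiliar behaviour poorly. A minimal sketch of that idea follows, using scikit-learn's `MLPRegressor` as a toy autoencoder on synthetic per-user activity features; the feature names, data, and threshold are illustrative assumptions, not values from the paper or the CERT dataset.

```python
# Sketch of reconstruction-error anomaly scoring with a small autoencoder.
# Features per user-day (illustrative): [logins, emails_sent, files_accessed].
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "benign" activity to stand in for a real training set.
normal = rng.normal(loc=[10, 25, 40], scale=[2, 5, 8], size=(500, 3))

scaler = StandardScaler().fit(normal)
X = scaler.transform(normal)

# Autoencoder: train the network to reproduce its input through a
# 2-unit bottleneck, so it can only memorize the dominant normal patterns.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                  max_iter=3000, random_state=0)
ae.fit(X, X)

def anomaly_score(x):
    """Mean squared reconstruction error: high error = unusual behaviour."""
    z = scaler.transform(np.asarray(x, dtype=float).reshape(1, -1))
    recon = ae.predict(z)
    return float(np.mean((recon - z) ** 2))

baseline = float(np.mean([anomaly_score(x) for x in normal]))
threshold = baseline * 5  # illustrative cut-off, tuned in practice

# A day with massively abnormal file access volume.
suspicious = [11.0, 24.0, 400.0]
print(anomaly_score(suspicious) > threshold)  # flags the unusual day
```

The same scoring loop applies unchanged to a variational autoencoder; only the model and loss change, which is why the paper can compare the two architectures on identical data.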
Australian Cyber Engineers Use IBM Watson To Detect Insider Threats Across Platforms - Which-50
Australian IBM cybersecurity engineers have developed an artificial intelligence (AI) system to analyse network connections and employee communications at enterprise scale. The model detects changes in users' behaviour and can automatically trigger investigations even when the changes occur across multiple platforms. IBM research found the root cause of 52 per cent of data breaches in Australia was malicious or criminal attacks, which often use methods such as phishing and social engineering. The new IBM solution, developed in the company's Gold Coast cybersecurity lab as part of a hackathon, uses AI to monitor changes in employee behaviour and flags indicators of compromise. It was debuted to the industry at last week's Australian Cyber Conference in Melbourne as a demonstration of what can be done, but the solution is not something that can be bought directly from IBM. Currently known as "QRadar Insider Threat Detector with Watson", it uses IBM's AI model, Watson, to analyse user-generated content – such as emails, Word documents, and Slack messages – to detect both the tone of content and employees' typical behaviour or "personalities".
Detecting low and slow insider threats
In my last post I discussed how machine learning could be used to detect phishing-based account compromise attacks using a real-world use case from the recent Verizon Data Breach Digest. This time I'll examine how to detect insider threats using similar techniques. The example I've chosen involves an organization in the middle of a buyout that was using retention contracts to prevent employee attrition. To find out what other employees were being offered, a middle manager acquired IT administrator credentials from a colleague and friend. He used these credentials to access the company's onsite spam filter and spy on the CEO's incoming email. The same credentials were also used to browse sensitive file shares and conduct other unauthorized actions.
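What makes this case "low and slow" is that every individual action used valid credentials; the signal is that those credentials touched resources they had never touched before. A minimal sketch of that baselining idea is below; the resource names and the first-time-access rule are illustrative assumptions, not the detection logic of any particular product.

```python
# Minimal per-credential baselining: record which resources a credential
# routinely accesses, then flag first-time use of a new resource.
from collections import defaultdict

class CredentialProfiler:
    def __init__(self):
        # credential -> set of resources seen during baselining
        self.history = defaultdict(set)

    def observe(self, credential, resource):
        """Record a routine access during the baselining period."""
        self.history[credential].add(resource)

    def is_anomalous(self, credential, resource):
        """Flag an established credential touching a never-before-seen resource."""
        seen = self.history[credential]
        return bool(seen) and resource not in seen

profiler = CredentialProfiler()
# Hypothetical routine activity for an IT administrator credential.
for r in ["backup-server", "patch-repo", "backup-server"]:
    profiler.observe("it-admin", r)

# The admin credential suddenly reading the spam filter's mail queue stands out,
# even though the login itself is perfectly valid.
print(profiler.is_anomalous("it-admin", "spam-filter/mail-queue"))  # True
print(profiler.is_anomalous("it-admin", "backup-server"))           # False
```

Real systems replace the binary first-seen rule with statistical rarity scores and peer-group comparisons, but the structure is the same: model normal use of each credential, then score deviations rather than individual events.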