Artificial intelligence (AI) has come a long way since its humble beginnings. Once thought to be a technology that would struggle to find its place in the real world, it is now all around us. It can influence the ads we see, the purchases we make and the television we watch. It is also fast becoming firmly embedded in our working lives -- particularly in the world of cyber security. The Capgemini Research Institute recently found that one in five organisations were using AI for cyber security before 2019, with almost two-thirds planning to implement it by 2020.
Now that cybersecurity threats are a staple of nightly newscasts, no one is shocked by their scope and severity. What is shocking is the financial damage the attacks are predicted to cause as they reverberate throughout the economy. (I know how terrible this type of crime can be. I have myself been the victim of data theft: hackers stole my deceased father's medical files and ran up more than $300,000 in false charges. I am still disputing ongoing bills that have accrued over the last 15 years.) Cybersecurity Ventures predicts global annual cyber-crime costs will grow from $3 trillion in 2015 to $6 trillion annually by 2021, a figure that includes damage to and destruction of data, stolen money, lost productivity, theft of intellectual property and of personal and financial data, embezzlement and fraud.
Companies and public sector organisations say they have no choice but to automate their cyber defences as hacking becomes increasingly sophisticated. Security professionals can no longer keep pace with the volume and sophistication of attacks on computer systems. In a study of 850 security professionals across 10 countries, more than half said their organisations were overwhelmed with data. So they are turning to machine-learning technologies that can identify cyber attacks by analysing huge quantities of network data, and that have the potential to block attacks automatically. By 2020, two out of three companies plan to deploy cyber security defences incorporating machine learning and other forms of artificial intelligence (AI), according to the Capgemini study, Reinventing cyber security with artificial intelligence.
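To make the idea concrete, here is a minimal sketch of the kind of machine-learning approach described above: an unsupervised model trained on normal network traffic that flags unusual flows as potential attacks. The feature choices (bytes transferred, packet count, connection duration) and all the numbers are hypothetical, and scikit-learn's IsolationForest stands in for whatever model a real product would use.

```python
# Toy anomaly detection on simulated network-flow data.
# Features per flow (all hypothetical): [bytes sent, packet count, duration in s].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: modest byte counts, packet counts and durations.
normal_flows = rng.normal(loc=[500.0, 20.0, 1.0],
                          scale=[100.0, 5.0, 0.3],
                          size=(500, 3))

# Two extreme flows, e.g. large exfiltration-like transfers.
suspect_flows = np.array([
    [50_000.0, 900.0, 30.0],
    [80_000.0, 1200.0, 45.0],
])

# Fit on normal traffic only; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns 1 for inliers and -1 for outliers (flagged anomalies).
print(model.predict(suspect_flows))
```

In a real deployment the features would come from live flow records rather than a synthetic sample, and flagged flows would feed an alerting or automated-blocking pipeline rather than a print statement.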
From deep learning neural networks to AI-based facial recognition, artificial intelligence advanced by leaps and bounds in 2016. Virtual assistants and autopilot driving services are already influencing our lives, and the pace of innovation is frightening to some -- but exciting for others. If you work in cyber security, however, these advances also herald more challenging times ahead. Cyber security is already one of the top business risks today, and adding artificial intelligence (AI) to the hacker's already-sophisticated toolkit will make the job of defending against cyber attackers harder still. Modern hackers don't just target governments or large organisations -- they can infiltrate any network and impact public services and individuals too.
In early March 2020, UK artificial intelligence (AI) security startup Darktrace was able to contain the spread of a sophisticated attack by Chinese cyber espionage and cyber crime group APT41 exploiting a zero-day vulnerability in Zoho ManageEngine. In a blog post describing the attack, Max Heinemeyer, director of threat hunting at Darktrace, wrote: "Without public indicators of compromise (IoCs) or any open source intelligence available, targeted attacks are incredibly difficult to detect. Even the best detections are useless if they cannot be actioned by a security analyst at an early stage. Too often, this occurs because of an overwhelming volume of alerts, or simply because the skills barrier to triage and investigation is too high." Heinemeyer says Darktrace's Cyber AI platform was able to detect the subtle signs of this targeted, unknown attack at an early stage, without relying on prior knowledge.
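Detecting an attack "without relying on prior knowledge", as described above, essentially means learning a baseline of normal behaviour and flagging deviations from it, rather than matching known signatures or IoCs. The toy sketch below illustrates that idea with a simple statistical baseline; it is a deliberately simplified, hypothetical example, not a description of Darktrace's actual method.

```python
# Signature-free anomaly detection, reduced to its simplest form:
# learn a per-host baseline of normal behaviour, then flag values
# that deviate strongly from it -- no prior IoCs required.
from statistics import mean, stdev

# Hypothetical hourly outbound-bytes history for one host (the baseline).
history = [1200, 1350, 1100, 1280, 1190, 1420, 1310, 1250]

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) > threshold * sigma

print(is_anomalous(1_300, history))   # ordinary traffic -> False
print(is_anomalous(25_000, history))  # sudden spike, e.g. exfiltration -> True
```

Real platforms model many behavioural dimensions at once and update their baselines continuously, but the principle is the same: the alert comes from the deviation itself, not from recognising a previously catalogued attack.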