"Human analysis is very limited. We quickly get overwhelmed," says Leyla Bilge, a member of the Symantec Research Labs whose team studies the future use of artificial intelligence in blocking attacks. "AI, on the other hand, can handle millions of calculations in a second. It can identify malicious activity that humans miss." The good news is that advances in AI, machine learning, and behavioral analytics may change the equation in security's favor.
I know how terrible healthcare records theft can be. I have been the victim of a data theft myself: hackers stole my deceased father's medical files and ran up more than $300,000 in false charges. I am still disputing ongoing bills that have been accruing for the last 15 years. This event set me on the path to finding a solution so that others would not suffer the consequences I still live with, but hospitals and other healthcare providers must be willing to make the change. The writing is on the wall.
Second in a series of two articles about the history of signature-based detections and how the methodology has evolved to identify different types of cybersecurity threats. Many security vendors are now incorporating increasingly sophisticated machine learning into their cloud-based analysis and classification systems, and into their products. All of these techniques have already proven their value in the Internet search, targeted advertising, and social networking business arenas. For example, supervised learning models lie at the heart of ensuring that the best and most applicable results are returned when searching for the phrase "never going to give you up." In the information security world, supervised learning models are a natural progression of the one-, two-, and multi-dimensional signature systems discussed in my earlier article.
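To make the progression concrete, here is a minimal sketch of how a supervised model generalizes beyond the exact matching of a signature: instead of comparing a file against a fixed pattern, it learns labeled examples and classifies an unseen sample by similarity. The classifier below is a deliberately tiny 1-nearest-neighbor model in pure Python, and all feature names and values are hypothetical illustrations, not real malware telemetry.

```python
# A 1-nearest-neighbor classifier: the simplest supervised model that moves
# from "exact signature match" to "closest known example".
# Feature vectors and labels below are hypothetical, for illustration only.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_set, sample):
    """Return the label of the training example closest to the sample."""
    _, label = min(training_set, key=lambda row: distance(row[0], sample))
    return label

# Hypothetical features: (file_size_kb, entropy_x10, suspicious_api_calls)
training_set = [
    ((120, 45, 0),  "benign"),
    ((300, 52, 1),  "benign"),
    ((95,  78, 9),  "malicious"),
    ((210, 80, 12), "malicious"),
]

# An unseen sample that matches no stored signature exactly, yet sits
# near known-bad examples in feature space:
print(predict(training_set, (100, 79, 10)))  # -> malicious
```

A one-dimensional signature is effectively the degenerate case of this scheme: a single feature with zero tolerance. Real products use far richer features and far more capable models, but the shift in kind, from exact match to learned similarity, is the same.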
Microsoft has been investing heavily in next-generation security technologies. These technologies use our ability to consolidate large sets of data and build intelligent systems that learn from that data. These machine learning (ML) systems flag and surface threats that would otherwise go unnoticed amid the continuous hum of billions of normal events, and that first-generation sensors miss because they cannot react to unfamiliar and subtle stimuli. By augmenting expert human analysis, machine learning has driven an antimalware evolution within Windows Defender Antivirus, providing close to real-time detection of unknown, highly polymorphic malware. At the same time, machine learning has also enhanced how Windows Defender Advanced Threat Protection (Windows Defender ATP) catches advanced attacks, including apex attacker activities that typically reside only in memory or are camouflaged as events triggered by common tools and everyday applications.
ML can help provide more comprehensive, context-rich detections of the few bad actors already in your network. Compromises will continue in 2018, and machine learning will play a growing role in intelligently sifting through alert information to detect them; in some cases, ML can also help the security team resolve them automatically or semi-automatically. Unfortunately, I think this is one area in which an adversarial actor using ML has the upper hand: ML may be creating some of the problems in 2018, but ML will also be used more to detect social media manipulation and automated phishing and spear-phishing attacks. Ransomware is one area where ML isn't even required to get pretty good detection. Since ransomware always has to encrypt your files (a behavior that can be monitored) in order to set up the ransom, both rules and ML tools can be used to determine when such disk activity is indicative of something malicious. The hope is that ML will do a better job than rules of flagging new behavioral signatures as malicious.
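The monitorable behavior mentioned above can be sketched as a simple rule: encrypted output is statistically close to random, so its byte entropy approaches the 8-bits-per-byte maximum, while ordinary documents sit well below it. The sketch below implements that one rule in pure Python; the 7.5-bit threshold is a hypothetical tuning choice, and a real detector would combine many such signals (write rates, file-extension churn, deletion of shadow copies) rather than rely on entropy alone.

```python
# Minimal sketch of an entropy rule for spotting ransomware-like writes.
# Encrypted/random data scores near 8.0 bits per byte; text scores far lower.
# The 7.5 threshold is a hypothetical tuning choice, not a vetted value.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte of the given buffer."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag a write whose byte distribution is near-uniform (cipher-like)."""
    return shannon_entropy(data) >= threshold

plain = b"quarterly report: revenue up, costs flat. " * 50
cipher_like = bytes(range(256)) * 10  # uniform bytes, entropy = 8.0

print(looks_encrypted(plain))        # False - low-entropy text
print(looks_encrypted(cipher_like))  # True  - near-maximal entropy
```

This is exactly the kind of hand-written rule the author expects ML to improve on: a learned model can weigh entropy together with other behavioral features and adapt its decision boundary, where a fixed threshold cannot.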