The intersection of AI and cybersecurity is a growing concern in the industry, particularly in how AI can be used to mitigate attacks and neutralize threats. Many stakeholders are also coming to terms with the fact that AI can be a force for evil. According to BCG, over 90% of cybersecurity professionals in the US and Japan expect attackers to start using AI to launch attacks, and this is already becoming a reality. AI presents big opportunities for cyber attackers, allowing them to scale their attacks dramatically in speed, volume, and sophistication.
We talked to Gatefy's team of cybersecurity experts to forecast the events and threats most likely to shape 2020. You can check the results below. First, we anticipate that several methods and threats already known and widely used by digital intruders will remain on the rise. In addition, our team points out that the growing migration to cloud platforms will likely increase the number of data breaches. On the defensive side, machine learning and big data are indispensable components of protection and security.
CIO Magazine reports that the BFSI sector will be the key target of cyber-criminals this year. They write, "BFSI companies were possibly the biggest target of cyber-criminals over the last couple of years. The trend is likely to continue as cyber-criminals will continue to find innovative ways to steal identity and money." Until recently, cybersecurity measures in the banking industry have lagged behind the latest technological advancements. Yet increasingly creative security breaches and continuing fraud in the financial services industry have forced BFSI companies to reboot their security strategies and adopt new technologies for their cybersecurity management.
Individuals, businesses, and governments are unprepared for the coming wave of deepfake attacks by malicious cybercriminals. For the unaware, "deepfake" refers to AI-generated false media that pretends to be the authentic version of what it emulates. In plain English, it is fake pictures, videos, and audio of real people, or the creation of entirely fictional people in the same media. Drawing on several of the seven traditional patterns of artificial intelligence, deepfakes use algorithms that mimic voice, mannerisms, facial expressions, body language, and lip movements to look deceptively real, creating audio and video clips of events that never occurred. These clips spread on social media and in the news, where most viewers are entirely unaware that they are not authentic.
Phishing attacks rely on one flaw we all share -- we are human. As humans, we suffer from common frailties, and across an organisation every person is susceptible to them. A spear-phishing campaign may put the fear of God, or at least the fear of being fired, into us. An aggressive email from the CEO demanding that a junior member of staff transfer money or send sensitive information may reduce the unfortunate recipient to a bag of nerves. Viewed rationally, why would the CEO ever do that?