The smartest companies now approach cybersecurity with a risk management strategy. Learn how to craft policies that protect your most important digital assets. The Royal Melbourne Institute of Technology (RMIT) has announced a new online cybersecurity course in a bid to address Australia's cybersecurity skills shortage. RMIT Online has partnered with the National Australia Bank (NAB) and Palo Alto Networks, and both organisations will provide mentors for the course. The course, called Cyber Security Risk and Strategy, will cover topics such as the fundamentals of cybersecurity and how to apply cybersecurity risk mitigation strategies to an organisation.
While there are innumerable cybersecurity threats, the end goal of many attacks is data exfiltration. Much has been said about using machine learning to detect malicious programs, but it's less common to discuss how machine learning can help identify other notable threats. Critically, machine learning can be key in detecting one of the most insidious types of malicious actors: one with legitimate access to your network and systems. When properly trained, machine-learning algorithms can identify insider threats and fraud before they become dangerous. When people hear the term "insider threat," many imagine an employee gone rogue, a disgruntled member of your team committing corporate espionage and leaking sensitive data or documents to competitors or criminals.
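The core idea is behavioural baselining: model each user's normal activity, then flag large deviations from that baseline. Below is a minimal sketch of that approach using a simple z-score test on per-user daily file-access counts; the user names, data shapes, and the threshold of three standard deviations are all illustrative assumptions, and production systems would use richer features and learned models.

```python
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """Flag users whose activity today deviates sharply from their own baseline.

    history: dict mapping user -> list of past daily file-access counts
    today:   dict mapping user -> today's file-access count
    Returns the set of users whose count is more than `threshold`
    standard deviations above their historical mean.
    """
    flagged = set()
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough data to establish a baseline
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on perfectly constant baselines
        z = (today.get(user, 0) - mu) / sigma
        if z > threshold:
            flagged.add(user)
    return flagged

# Hypothetical activity data: bob suddenly touches ten times his usual file count.
history = {
    "alice": [10, 12, 11, 9, 10],
    "bob":   [5, 6, 5, 7, 6],
}
today = {"alice": 11, "bob": 60}
print(flag_anomalies(history, today))  # → {'bob'}
```

The key design point is that the baseline is per-user: bob's 60 accesses are anomalous only relative to his own history, not to some global average, which is what lets this style of detection catch an insider whose access is otherwise entirely legitimate.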
Artificial Intelligence (AI) has been a game changer for global businesses, opening doors to innumerable possibilities. With the integration of AI in businesses, the global economy is expected to grow exponentially in the coming years. Although the introduction of AI into business strategies is considered a revolutionary idea, what most business executives struggle with is the proper application of AI throughout their organization in a way that generates maximum ROI and value. This gives rise to several questions: "How do we educate our staff about AI? How can we acquire AI-trained employees? What is the most suitable AI strategy for our business? How do we certify that our AI is trustworthy? Will there be new privacy and cybersecurity threats to deal with?"
The importance of having highly trained people in the trenches should not be underestimated. Unlike AI solutions, whose outcomes are based on (admittedly huge) rule sets, people are capable of abstract thought. This is crucial when it comes to tackling cyber-attacks. Every attack currently has a human point of origination, after all, and even the most sophisticated AI and machine learning algorithms can't truly understand, or hope to mimic, the chaotic and diverse nature of the human mind. Years of cyber skills training have taught me that the best talent is not necessarily that which has been classically trained.
Drones flying over populated areas, unchecked, represent a real threat to our privacy, researchers have warned. On Wednesday, academics from Israel's Ben-Gurion University of the Negev (BGU) and Fujitsu System Integration Laboratories revealed the results of a new study which examined over 200 techniques and technologies currently in use to detect and disable drones. BGU and Fujitsu say this is the first study of its kind to examine how lawmakers and drone developers are attempting to control drone usage. The research, titled "Security and Privacy Challenges in the Age of Drones" (PDF), found that cybersecurity measures developed to keep these flying, camera-laden vehicles in check are falling woefully short. Drones are now used for military purposes, for pizza deliveries, for delivering life-saving medication, and for surveillance and monitoring in agriculture.
AI fuzzing uses machine learning and similar techniques to find vulnerabilities in an application or system. Fuzzing has been around for a while, but it's been too hard to do and hasn't gained much traction with enterprises. Adding AI promises to make the tools easier to use and more flexible. The good news is that enterprises and software vendors will have an easier time finding potentially exploitable vulnerabilities in their systems so they can fix them before bad guys get to them. The bad news is that the bad guys will have access to this technology as well and will soon start to find zero-day vulnerabilities on a massive scale.
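At its core, fuzzing means throwing large numbers of mutated inputs at a target and recording which ones crash it; the AI variants described above replace random mutation with learned input generation. The sketch below shows the traditional mutation-based form against a deliberately buggy toy parser. The parser, the mutation operators, and all names are illustrative assumptions, not any particular tool's API.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy target under test: expects b'HDR' followed by a length byte."""
    if data[:3] == b"HDR":
        return data[3]  # bug: IndexError when the input is exactly b'HDR'
    return -1

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Randomly flip a bit in, insert a byte into, or delete a byte from the seed."""
    data = bytearray(seed)
    op = rng.choice(("flip", "insert", "delete"))
    if op == "flip" and data:
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)
    elif op == "insert":
        data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
    elif op == "delete" and data:
        del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 10_000, rng_seed: int = 0):
    """Feed mutated inputs to the target, collecting any crashing inputs."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

crashes = fuzz(parse_header, seed=b"HDR\x05")
print(len(crashes), "crashing inputs found")
```

Random mutation is effective here because the crash is only one byte-deletion away from the seed; the promise of AI fuzzing is reaching bugs buried behind structured formats and deep state, where blind mutation almost never stumbles onto a valid-enough input.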
Enterprise security has always been a cat-and-mouse game, with cyber adversaries constantly evolving their attack systems to get past defences. Can AI-based systems help ward off new-age threats and zero-day attacks? To get a perspective, we spoke with Vikas Arora, IBM Cloud and Cognitive Software Leader, IBM India/South Asia, who shares his view on how AI can impact enterprise security. What are your views on the cybersecurity landscape in India? Which sectors do you think are the most vulnerable today?
Nicole Eagan believes a robot uprising draws nigh. As the chief executive of Darktrace, a cybersecurity "unicorn," or private firm valued at more than $1 billion, Eagan helps companies spot intruders in corporate networks, quarantine them, and defend data. The British firm's technology uses machine learning techniques to gain an understanding of the internal state of customers' networks and then watches for telltale deviations from the norm that may indicate foul play. While Darktrace uses A.I. techniques for defense, the company anticipates that thieves and spies will soon catch up. "I expect that we're going to see artificial intelligence used by the attackers," says Eagan, noting that there already have been "early glimpses" of that future coming to pass.
Today, more companies are choosing to implement artificial intelligence (AI) and machine learning (ML) in cybersecurity tools. In fact, a Webroot report found that 74% of businesses across the US and Japan had already done so in 2017. ML and AI could be crucial technologies in the future prevention of cyberattacks. However, a recent report found that 87% of researchers believe it would take more than three years before they can trust AI to lead cybersecurity decisions. According to Webroot, 73% of organisations plan to use even more AI and ML tools in 2019.
Once upon a time, cybersecurity was a matter of protecting a server or two, and a handful of endpoints, from a random virus every now and then. Over the past 20 years, though, the size and scope of the infrastructure most companies need to protect, and the volume of malicious threats those companies face, have grown exponentially. That's why artificial intelligence (AI) is crucial today: it has quickly evolved from science-fiction pipe dream to overhyped buzzword to business imperative for defending against modern threats. The halcyon days of a small local network that was easily defined and contained are long gone.