As AI algorithms, and the computing power that drives them, improve year on year, their ability to positively transform the world in which we live is unquestionable. In fact, PwC predicts that AI could contribute up to $15.7 trillion to the global economy by 2030. Indeed, as many as one in five of the 1,000 US organisations recently surveyed by PwC had plans to implement AI enterprise-wide in 2019. The PwC research also reveals that companies are increasingly placing AI models at the very core of their production processes, in a bid to enhance operational decision-making and provide forward-looking intelligence to people in every function throughout the business. To many, this move to AI is no surprise.
Danelle is CMO at Blue Hexagon. She has more than 15 years of experience bringing new technologies to market. Prior to Blue Hexagon, Danelle was VP Marketing at SafeBreach where she built the marketing team and defined the Breach and Attack Simulation category. Previously, she led strategy and marketing at Adallom, a cloud security company acquired by Microsoft. She was also Director, Security Solutions at Palo Alto Networks, driving growth in critical IT initiatives like virtualization, network segmentation and mobility.
It may seem counter-intuitive, but the answer probably isn't a surge in employee training or hiring of cybersecurity talent. That's because humans will always make errors, and humans can't cope with the scale and stealth of today's cyberattacks. To best protect information systems, including data, applications, networks, and mobile devices, look to more automation and artificial intelligence-based software to provide the defense in depth required to reduce risk and stop attacks. That's one of the key conclusions of "Security in the Age of AI," a new report from Oracle released in May. The report draws on a survey of 775 US-based respondents: 341 CISOs, CSOs, and other CXOs at firms with at least $100 million in annual revenue; 110 federal or state government policy influencers; and 324 technology-engaged workers in non-managerial roles.
Both AI and cybersecurity are broad and poorly understood fields. This book gives you an overview of the various technologies that make up AI, where they came from, and what AI has evolved into today. Cybersecurity is another field that has evolved over the last few decades. Dive into the world of cybersecurity, then learn how AI is being applied to the battle. When you're done reading this book, you will be spouting terms like cognitive computing, machine learning, and deep learning, and you'll know how they apply to the cybersecurity space.
If you are reading this, you are on the internet, and this is thanks to Sir Tim Berners-Lee, True North 2019's keynote speaker. He invented the World Wide Web, which spawned the internet revolution, fundamentally changing the way we communicate and do business. With the growing adoption and breakthroughs in artificial intelligence and digital technologies, AI is poised to be the most disruptive revolution since the invention of the internet. What does this mean for humans in the workforce? University of Waterloo speakers at True North 2019 explore how AI is already transforming industry and what this means for the future of work.
How can this critical capability make or break your overall cybersecurity? And where does machine learning prove insufficient? We answer these and other vital questions about machine learning in SIEM below. "We are no longer asking the singular question of how we're managing risk and providing security to our organization. We're now being asked how we're helping the enterprise realize more value while assessing and managing risk, security and even safety. The best way to bring value to your organizations today is to leverage automation."
The vast scope of GDPR has raised fresh challenges, chief among them the complex interaction between AI and the GDPR. In particular, this shines a spotlight on Article 22, which concerns automated profiling and decision-making, where the incorrect use of personal data can have huge ramifications for the individuals concerned. The problem is that existing AI systems make automated decisions without user consent. Since data is the engine behind AI, Article 22 affects every industry hoping to leverage the power of technology to drive efficiencies through automated means. In an increasingly data-reliant business landscape, how can organisations reconcile the advent of disruptive technologies and their inherent risks while remaining fully compliant?
Artificial intelligence (AI) is already playing a role in combating malware and other threats. Through machine learning, AI can now do more than just add malware samples to security software: it can also detect future versions and similar variants of the same malware. But what if the very AI that helps organizations fight these threats were co-opted by cybercriminals? What if, through AI, malware became smarter, tougher, and almost undetectable?
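One common static technique behind this kind of variant detection is to compare byte n-gram profiles of files: samples from the same malware family tend to share most of their n-grams even after small code changes. The sketch below is a minimal, hypothetical illustration of that idea using cosine similarity over byte-trigram counts; the function names, the 0.8 threshold, and the toy byte strings are assumptions for illustration, not any vendor's actual method.

```python
from collections import Counter

def ngram_profile(data: bytes, n: int = 3) -> Counter:
    # Frequency profile of overlapping byte n-grams, a common
    # static feature representation for malware similarity.
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Cosine of the angle between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = sum(v * v for v in a.values()) ** 0.5
    norm_b = sum(v * v for v in b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def looks_like_variant(sample: bytes, known: bytes, threshold: float = 0.8) -> bool:
    # Flag a sample whose n-gram profile is close to that of a
    # known malicious sample (threshold is an illustrative choice).
    return cosine_similarity(ngram_profile(sample), ngram_profile(known)) >= threshold
```

A "v2" of the same payload shares almost all of its trigrams with "v1", so its similarity stays high, while an unrelated file scores near zero. Real systems use far richer features (API calls, control-flow graphs, learned embeddings), but the core idea of generalizing from known samples to unseen variants is the same.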
After years of hype around AI and machine learning, skepticism and a focus on practical applications of the technology are now taking center stage. In the security industry, this was abundantly clear at the recent RSA Conference, where 45,000 people and a thousand vendors descended on San Francisco to discuss industry challenges and debate the best solutions. Despite the many voices contending for attention at the show, there was little to no dispute that the cybersecurity skills gap continues to be one of the industry's biggest challenges. An (ISC)² report released during the conference says there are 2.93 million cybersecurity positions open and unfilled around the world. But here's what is next for AI.
Risk-related AI applications (risk management, lending, compliance, fraud and cybersecurity) account for 72% of the total $2.8 billion in funds raised for AI vendor companies in banking, according to the latest report by Emerj Artificial Intelligence Research. Compliance and fraud-related applications make up 32% of the total AI vendor landscape in banking, but banks report these applications as a mere 19% of their current AI initiatives. Bankers today see AI as a risk-reduction technology. Their AI initiatives are likely to yield negative ROI in part because they hold naive views about AI's integration and data requirements. Banks are eager to automate compliance specifically, especially given recent data privacy laws such as GDPR.