If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The Defense Advanced Research Projects Agency (DARPA) recently launched the Explainable Artificial Intelligence (XAI) program, which aims to create a suite of new AI techniques that enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems. In this paper, inspired by DARPA's XAI program, we propose a new paradigm in security research: Explainable Security (XSec). We discuss the "Six Ws" of XSec (Who? What? Where? When? Why? and How?) and argue that XSec has unique and complex characteristics: XSec involves several different stakeholders (i.e., the system's developers, analysts, users and attackers) and is multi-faceted by nature (as it requires reasoning about system model, threat model and properties of security, privacy and trust, as well as about concrete attacks, vulnerabilities and countermeasures). We define a roadmap for XSec that identifies several possible research directions.
While security as a percentage of IT spend continues to grow at a robust rate, the cost of security breaches is growing even faster. Organizations are spending close to $100 billion on a dizzying array of security products. In fact, it is not uncommon for CISO organizations to have 30 to 40 security products in their environment. However, if you ask chief information security officers how they feel about their security risk, they will express concerns over being highly exposed and vulnerable. Artificial intelligence (AI) and machine learning (ML) can offer IT security professionals a way to enforce good cybersecurity practices and shrink the attack surface instead of constantly chasing after malicious activity.
My new report from 451 Research – 'The Current and Future State of AI and Machine Learning' – brings together the key points about AI and ML I've been writing about, speaking to clients about, and presenting on for the past two years. If you agree that technology adoption – like so many other trends – takes the form of an S curve, then I think it's worth asking ourselves: where are we on the S curve of adoption for AI and machine learning? It's impossible to know for certain, of course, but my somewhat educated guess is that we are very early. Now, the point of this bit of cod-science isn't to spark a debate as to whether we should be a few millimetres to the left or right. Rather, it serves to demonstrate that we're early in the evolution of machine learning, and its use may be barely perceptible to some – even those in the technology industry.
As our businesses become more digital in all dimensions, high-profile information security breaches are making the news headlines with increasing frequency. The recently announced card hacking activity at online travel service Orbitz is just one of the latest examples. On March 20, 2018, Orbitz announced a security breach that exposed information derived from at least 880,000 customer payment cards. The breach took place between October and December of 2017, involving customer transaction records dating from 2016 and 2017. Although data captured on Orbitz.com was not affected, the company advised customers who used Orbitz travel services within the past two years to check their credit and debit card billing statements from this period and to contact their banks if fraudulent charges were identified.
Digital transformation is a fact of every business today. While the transformation of business has been well documented, it is less clear how, and which, technologies are going to drive this process. Recently, research from Grant Thornton, the world's fifth-largest professional services network of independent accounting and consulting member firms, based in London, UK, has quantified how much digital transformation will cost, how it will impact enterprises financially, and what the ultimate shape of digital enterprises will be. The research, which was published at the end of February, surveyed 304 CFOs and other senior financial leaders from companies with revenues ranging from $100 million to over $20 billion.
In January, Google's parent company, Alphabet, announced the launch of Chronicle – an artificial intelligence-based solution for the cybersecurity industry – promising "the power to fight cyber crime on a global scale." There are mixed opinions on the value and readiness of artificial intelligence (AI) in our industry. Just last year, Google's own Heather Adkins, director of information security and privacy, addressed the crowd at TechCrunch Disrupt 2017 and criticized the overuse of artificial intelligence in the cybersecurity industry. Adkins argued that the implementation of artificial intelligence relies too heavily on feedback, "to learn what is good and bad…but we're not sure what good and bad is." She went on to say that companies should invest in more human talent and less technology.
Editor's Note: The following blog post is a summary of a presentation from RFUN 2017 featuring Staffan Truvé, CTO and co-founder of Recorded Future, and Chris Poulin, principal/director at Booz Allen Hamilton. Artificial intelligence is about constantly trying to push the technology barrier -- once you actually succeed, however, it can be challenging to find the next new territory. There are a multitude of tricky questions to answer in dealing with artificial intelligence, so fortunately, there is no shortage of work in the field. Artificial intelligence embodies a curious contradiction: it deals with solving simple, repetitive human tasks while at the same time trying to push machines beyond human capability. Staffan Truvé, CTO and co-founder of Recorded Future, recently shared his expertise at the company's annual threat intelligence conference in D.C.
Heading into 2018, some of the most prominent voices in information security predicted a 'machine learning arms race' wherein adversaries and defenders frantically work to gain the edge in machine learning capabilities. Despite advances in machine learning for cyber defense, "adversaries are working just as furiously to implement and innovate around them." This looming 'arms race' points to a larger narrative about how artificial intelligence (AI) and machine learning (ML) -- as tools of automation in any domain and in the hands of any user -- are dual-use in nature, and can be used to disrupt the status quo. Like most technologies, not only do AI and ML provide more convenience and security as tools for consumers, but each can be exploited by nefarious actors as well.
What Is Cryptojacking and Why Is It a Cybersecurity Risk? Artificial intelligence is already redefining cybersecurity, exposing sophisticated attacks and adding a level of Terminator-style relentlessness to threat detection tools and anti-malware software. AI is even being used by a startup to scou...