A 20-Year Community Roadmap for Artificial Intelligence Research in the US

arXiv.org Artificial Intelligence

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, to handle complex tasks and responsibilities effectively and ethically, to engage in meaningful communication, and to improve their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.



Artificial intelligence (AI) and cognitive computing: what, why and where

#artificialintelligence

Although artificial intelligence (as a set of technologies, not in the sense of mimicking human intelligence) has been around for a long time in many forms, it's a term that quite a few people, certainly IT vendors, don't like to use much anymore – but artificial intelligence is real, for your business too. Instead of talking about artificial intelligence (AI), many describe the current wave of AI innovation and acceleration with – admittedly somewhat differently positioned – terms and concepts such as cognitive computing, or they focus on real-life applications of artificial intelligence that often start with words such as "smart" (omnipresent in anything related to the IoT as well), "intelligent", "predictive" and, indeed, "cognitive", depending on the exact application – and vendor. Despite these terminology issues, artificial intelligence is essential for and in, among others, information management, healthcare, life sciences, data analysis, digital transformation, security (cybersecurity and others), various consumer applications, next-gen smart building technologies, FinTech, predictive maintenance, robotics and much more. On top of that, AI is being added to several other technologies, including IoT and big (as well as small) data analytics. There are many reasons why several vendors hesitate to use the term artificial intelligence for their AI solutions and innovations and often package them under another term (trust us, we've been there). Artificial intelligence (AI) is a term that has somewhat of a negative connotation in general perception, but also in the perception of technology leaders and firms.


The artificial reality of cyber defence

#artificialintelligence

Attacks are getting more complex. This is especially true when it comes to cyberwar, so much so that government-sponsored attacks have been bolstered by research investments that approach military proportions. Just look at the recent report published by the US State Department, which said that strategies for stopping cyber attacks need to be fundamentally reconsidered in light of complex cyber threats posed by rival states. In order to detect and stop these attacks, innovation is required. I say that because anomaly detection based on traditional correlation rules often results in too many false positives and more events than can reasonably be reviewed manually.
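To make that alert-volume problem concrete, here is a toy sketch (not from the article; the threshold, traffic model, and numbers are all hypothetical) of how a fixed-threshold correlation rule can flood analysts with alerts even on purely benign activity:

```python
import random

# Hypothetical rule: flag any host with more than THRESHOLD failed logins per hour.
THRESHOLD = 20

random.seed(0)
# Simulate hourly failed-login counts for 1,000 benign hosts; legitimate activity
# (typos, expired credentials, batch jobs) is noisy and occasionally spikes.
benign_counts = [random.gauss(10, 8) for _ in range(1000)]

alerts = sum(1 for c in benign_counts if c > THRESHOLD)
print(f"Alerts raised on benign traffic alone: {alerts} of {len(benign_counts)} hosts")
# Even with no attack present, a rule like this trips on roughly a tenth of hosts,
# producing far more events than an analyst can reasonably review by hand.
```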


An Anomaly Contribution Explainer for Cyber-Security Applications

arXiv.org Machine Learning

In this paper we introduce the Anomaly Contribution Explainer (ACE), a tool to explain security anomaly detection models in terms of the model features through a regression framework, and its variant, ACE-KL, which highlights the important anomaly contributors. ACE and ACE-KL provide insight into which attributes significantly contribute to an anomaly by building a specialized linear model to locally approximate the anomaly score that a black-box model generates. We conducted experiments with anomaly detection models to detect security anomalies on both synthetic data and real data. In particular, we evaluate performance on three public data sets: CERT insider threat, netflow logs, and Android malware. The experimental results are encouraging: our methods consistently identify the correct contributing feature in the synthetic data where ground truth is available; similarly, for real data sets, our methods point a security analyst in the direction of the underlying causes of an anomaly, including in one case leading to the discovery of previously overlooked network scanning activity. We have made our source code publicly available. Cyber-security is a key concern for both private and public organizations, given the high cost of security compromises and attacks; malicious cyber-activity cost the U.S. economy between $57 billion and $109 billion in 2016 [1]. As a result, spending on security research and development, and on security products and services to detect and combat cyber-attacks, has been increasing [2]. Organizations produce large amounts of network, host and application data that can be used to gain insights into cyber-security threats, misconfigurations, and network operations. While security domain experts can manually sift through some amount of data to spot attacks and understand them, it is virtually impossible to do so at scale, considering that even a medium-sized enterprise can produce terabytes of data in a few hours.
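The local-surrogate idea behind ACE can be sketched roughly as follows. This is an illustrative approximation in the spirit of the abstract, not the authors' implementation: the black-box scorer, the Gaussian perturbation scheme, the Ridge surrogate, and the weight-times-value contribution rule are assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_anomaly(score_fn, x, n_samples=500, noise_scale=0.1, seed=0):
    """Fit a local linear surrogate to a black-box anomaly score around x and
    return per-feature contribution estimates (larger = more suspicious)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]

    # Sample a neighborhood around the instance and query the black-box scorer
    Z = x + rng.normal(scale=noise_scale, size=(n_samples, d))
    y = score_fn(Z)

    # The surrogate's coefficients approximate how the score changes with each
    # feature near x; weight * feature value is one simple contribution estimate
    surrogate = Ridge(alpha=1.0).fit(Z, y)
    return surrogate.coef_ * x


if __name__ == "__main__":
    def black_box(Z):
        # Hypothetical scorer standing in for an opaque anomaly detector:
        # a quadratic score that weights the third feature heavily.
        return Z[:, 0] ** 2 + Z[:, 1] ** 2 + 10.0 * Z[:, 2] ** 2

    x = np.array([0.5, 0.4, 3.0])  # a record whose third feature looks inflated
    print("Per-feature contributions:", np.round(explain_anomaly(black_box, x), 2))
```

In this toy setup the inflated third feature dominates the contribution vector, mirroring how ACE is intended to point an analyst at the attributes driving a high anomaly score.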