Military


Don't Trust Artificial Intelligence? Time To Open The AI 'Black Box'

#artificialintelligence

Despite its promise, the growing field of Artificial Intelligence (AI) is experiencing a variety of growing pains. In addition to the problem of bias I discussed in a previous article, there is also the 'black box' problem: if people don't know how AI comes up with its decisions, they won't trust it. In fact, this lack of trust was at the heart of many failures of one of the best-known AI efforts: IBM Watson – in particular, Watson for Oncology. Experts were quick to single out the problem. "IBM's attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster," says Vyacheslav Polonski, Ph.D., UX researcher for Google and founder of Avantgarde Analytics.


workshopmlai.wp.imt.fr

#artificialintelligence

The joint availability of computational power and huge datasets has considerably changed the landscape of Artificial Intelligence. In many fields, applications (self-driving cars, cybersecurity, e-health…) that seemed out of reach in the past are now closer to becoming a reality. Recent advances in Machine Learning, the key component of AI, show the growing maturity of algorithms that are now able to handle an increasing number of new tasks. However, simple adversarial attacks can still easily defeat a learning algorithm, and the potentially massive deployment of AI tools in various environments raises many new concerns. In addition to scalability and versatility, awareness of drifting or fake data, privacy, interpretability, and accountability are now all properties that a learning and decision system should take into account.
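
The claim that simple adversarial attacks can easily defeat a learner is easy to make concrete. Below is a minimal, illustrative sketch of the classic fast gradient sign method (FGSM); the attack is not named by the workshop blurb, and the untrained linear model and random input are placeholders chosen purely for this example.

```python
# Minimal FGSM sketch (illustrative assumption; the blurb names no specific attack).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.25):
    """One signed-gradient step in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy demonstration on an untrained linear classifier.
torch.manual_seed(0)
model = nn.Linear(4, 3)
x = torch.randn(1, 4)
y = model(x).argmax(dim=1)            # treat the current prediction as the label
x_adv = fgsm_attack(model, x, y)
print("clean:", model(x).argmax(dim=1).item(),
      "adversarial:", model(x_adv).argmax(dim=1).item())
```

The single signed-gradient step is what makes the attack "simple": it needs only one backward pass through the model, yet it is often enough to flip a prediction.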


Human Intentionality, Artificial Intelligence and the Future of Cybersecurity

#artificialintelligence

Artificial Intelligence is helping both attackers and their targets, shifting the nature of attack surfaces without weakening the intentions of those who wish to benefit from its crashes and failures. The cyber defense expert may become the ultimate profession in charge of continuously understanding and monitoring what is happening under the hood with AI. Should we expect a digital world entirely administered by so-called "strong" Artificial Intelligence, capable of securing its own infrastructure and reliable enough to fully ensure the security of its human users? That would be a true singularity moment, one in which the human operator would have no interaction at all with the artificial brain, whether for its maintenance or its evolution. Before that horizon, which seems far away and would not necessarily be socially desirable, the human operator (user, administrator, or designer) will unfortunately remain capable of compromising, intentionally or not, the integrity and efficiency of the many AI processes in charge of our digital security.


Online Cyber-Attack Detection in Smart Grid: A Reinforcement Learning Approach

arXiv.org Machine Learning

Early detection of cyber-attacks is crucial for safe and reliable operation of the smart grid. The literature offers outlier detection schemes that make sample-by-sample decisions and online detection schemes that require perfect attack models. In this paper, we formulate the online attack/anomaly detection problem as a partially observable Markov decision process (POMDP) and propose a universal, robust online detection algorithm using the framework of model-free reinforcement learning (RL) for POMDPs. Numerical studies illustrate the effectiveness of the proposed RL-based algorithm in the timely and accurate detection of cyber-attacks targeting the smart grid.
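
The abstract does not include the algorithm itself; as a rough illustration of the framing, the toy sketch below casts change detection in a noisy measurement stream as a reinforcement-learning problem. The Gaussian measurement model, the CUSUM-style statistic used as a state proxy, the reward values, and tabular Q-learning are all simplifying assumptions for this example, not the paper's POMDP solution.

```python
# Toy sketch: online attack detection as RL. An agent watches a stream and
# decides each step whether to keep monitoring or raise an alarm.
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 21                        # buckets of a CUSUM-style statistic
Q = np.zeros((N_STATES, 2))          # actions: 0 = keep monitoring, 1 = alarm
alpha, gamma, eps = 0.1, 0.99, 0.1

def llr_step(x, mu0=0.0, mu1=0.5):
    # Log-likelihood ratio of one sample under unit-variance Gaussians.
    return 0.5 * ((x - mu0) ** 2 - (x - mu1) ** 2)

def bucket(stat):
    return min(int(stat), N_STATES - 1)

for episode in range(5000):
    onset = rng.integers(5, 50)      # unknown attack start time
    stat, t, s = 0.0, 0, bucket(0.0)
    while t < 100:
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        if a == 1:                   # raise the alarm and end the episode
            r = 1.0 if t >= onset else -5.0    # punish false alarms hard
            Q[s, a] += alpha * (r - Q[s, a])
            break
        x = rng.normal(0.5 if t >= onset else 0.0, 1.0)
        stat = max(0.0, stat + llr_step(x))    # recursive CUSUM update
        s2 = bucket(stat)
        r = -0.1 if t >= onset else 0.0        # small penalty per step of delay
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s, t = s2, t + 1

# The greedy policy typically recovers a threshold rule on the statistic,
# trading false alarms against detection delay.
print("alarm raised in buckets:", np.where(Q.argmax(axis=1) == 1)[0])
```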


Adversarial Attacks on Node Embeddings

arXiv.org Machine Learning

The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis of the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and degrade both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable (they generalize to many models) and are successful even when the attacker's actions are restricted.
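
As a rough intuition for what poisoning the network structure means here, the sketch below brute-forces the single edge flip that most shifts the adjacency spectrum, which factorization-based random-walk embeddings (DeepWalk-style) implicitly depend on. The spectral loss proxy, the random toy graph, and the exhaustive search are all assumptions for illustration; the paper derives far more efficient perturbations.

```python
# Toy sketch of a structure-poisoning attack (illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(1)
n = 12
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric graph, no self-loops

def spectral_loss(A, k=4):
    """Proxy for embedding quality: sum of the top-k adjacency eigenvalues."""
    return np.sort(np.linalg.eigvalsh(A))[-k:].sum()

base = spectral_loss(A)
best_flip, best_change = None, 0.0
for i in range(n):
    for j in range(i + 1, n):
        B = A.copy()
        B[i, j] = B[j, i] = 1.0 - B[i, j]        # flip edge (i, j)
        change = abs(spectral_loss(B) - base)
        if change > best_change:
            best_flip, best_change = (i, j), change

print(f"most damaging flip: {best_flip}, spectral shift {best_change:.3f}")
```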


Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks

arXiv.org Machine Learning

Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to original, legal inputs, can mislead a DNN into classifying them as arbitrary target labels. This work hardens DNNs against adversarial attacks through defensive dropout. Besides using dropout during training for the best test accuracy, we propose using dropout at test time as well to achieve strong defense effects. We model the problem of building robust DNNs as an attacker-defender two-player game, in which the attacker and the defender know each other's strategies and optimize their own strategies towards an equilibrium. Based on observations of how the test dropout rate affects test accuracy and attack success rate, we propose a defensive dropout algorithm that determines an optimal test dropout rate given the neural network model and the attacker's strategy for generating adversarial examples. We also investigate the mechanism behind the strong defense effects achieved by defensive dropout. Compared with stochastic activation pruning (SAP), another defense that introduces randomness into the DNN model, defensive dropout achieves much larger variance in the gradients, which is the key to its improved defense (a much lower attack success rate). For example, defensive dropout can reduce the attack success rate from 100% to 13.89% under the currently strongest attack, the C&W attack, on the MNIST dataset.
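
The core of the idea is simply where dropout is applied. A minimal sketch, assuming a generic PyTorch classifier rather than the authors' models: dropout stays stochastic at test time, so repeated queries on the same input return different logits and gradient-based attacks lose their footing. The network shape and the dropout rate here are arbitrary placeholders; the paper's contribution is tuning the test dropout rate against a known attacker strategy.

```python
# Minimal test-time ("defensive") dropout sketch; architecture and rate are assumptions.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, p_test=0.3):
        super().__init__()
        self.p_test = p_test
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # training=True keeps dropout stochastic even under model.eval(),
        # which is the essence of the defense.
        h = nn.functional.dropout(h, p=self.p_test, training=True)
        return self.fc2(h)

model = SmallNet().eval()
x = torch.randn(1, 784)
# Two forward passes on the same input now produce different logits, which is
# what increases gradient variance and destabilizes attacks such as C&W.
print(model(x)[0, :3], model(x)[0, :3])
```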


The Pentagon is investing $2 billion into artificial intelligence

#artificialintelligence

The Pentagon's high-tech research agency laid the groundwork for the Internet, stealth aircraft and self-driving cars. At its 60th anniversary conference on Friday, DARPA announced a $2 billion investment to push the frontier of AI forward. "We think it's a good time to seed the field of AI," John Everett, the deputy director of DARPA's Information Innovation Office, told CNNMoney. "We think we can accelerate two decades of progress into five years." Artificial intelligence, which lets machines perform tasks traditionally done by humans, is a trendy topic in technology and business circles.


5 Artificial Intelligence and Machine Learning Use-Cases for Cybersecurity

#artificialintelligence

Evolving cybersecurity risks and trends continue to force organizations to adopt waves of precautionary changes and solutions, pushing them to adapt and evolve or fall behind in the marketplace. Rapid technological development and adoption across diverse industries present additional challenges for organizational cybersecurity processes and systems. Employing Artificial Intelligence (AI) and Machine Learning (ML) can help create more secure environments in which digital transformation and new technologies enhance automation and growth in line with various enterprise models. Risk and mitigation strategies within corporate structures should call for the rapid inclusion of deep learning (DL), ML, and other AI techniques to reduce the human error that pervades individual behavior. To err is human, and that fallibility adds complexity to the cybersecurity infrastructure of any organization.