To cripple AI, hackers are turning data against itself

#artificialintelligence

A neural network looks at a picture of a turtle and sees a rifle. A self-driving car blows past a stop sign because a carefully crafted sticker bamboozled its computer vision. An eyeglass frame confuses facial recognition tech into thinking a random dude is actress Milla Jovovich. The hacking of artificial intelligence is an emerging security crisis. To pre-empt criminals who would hijack artificial intelligence by tampering with datasets or the physical environment, researchers have turned to adversarial machine learning.
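The examples above all exploit the same basic mechanism: nudging each input pixel slightly in the direction that most increases the model's loss. The article doesn't specify a particular attack, but a minimal PyTorch sketch of the fast gradient sign method (FGSM), one common way such inputs are crafted, looks like this (`model`, `image`, `label`, and `epsilon` are illustrative assumptions):

```python
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` slightly in the direction that maximizes the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the input gradient: tiny per-pixel changes,
    # potentially large effect on the classifier's prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep valid pixel range
```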


How IBM Wants to Defend Neural Networks Against Adversarial Attacks

#artificialintelligence

The security and robustness of deep neural network (DNN) architectures is one of the most important areas of research in the deep learning field. The inherent complexity of neural networks and their lack of interpretability make them vulnerable to many forms of attack. Some of the most sophisticated and scariest forms of attacks on DNNs are generated using other neural networks. Adversarial neural networks (ANNs) are often used to generate numerous attack vectors on DNNs by manipulating aspects such as the input dataset or the training policy. Protecting against adversarial attacks is far from an easy endeavor, as attackers are always mutating and evolving.
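One widely studied defense, in the same spirit as the tooling IBM is building (though not necessarily its exact method), is adversarial training: crafting attack examples on the fly and folding them into the training loop so the network learns to resist them. A rough PyTorch sketch, where `model`, `optimizer`, and `epsilon` are illustrative assumptions:

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Craft FGSM adversarial versions of the current batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Train on clean and adversarial inputs together.
    optimizer.zero_grad()  # also clears gradients left over from the attack step
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```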


Adversarial Learning for Good: My Talk at #34c3 on Deep Learning Blindspots

#artificialintelligence

When I was first introduced to the idea of adversarial learning for security purposes by Clarence Chio's 2016 DEF CON talk and his related open-source library deep-pwning, I immediately started wondering about applications of the field, both to build robust, well-tested models and to serve as a preventative measure against predatory machine learning practices.


Google & Johns Hopkins University Can Adversarial Examples Improve Image Recognition?

#artificialintelligence

A fundamental concept in Chinese philosophy and culture is Yin and Yang -- the belief that harmony is achieved when opposites coexist and share elements of each other. This can be interpreted to suggest that purpose and goodness can be found even in things like floodwaters, mosquitoes, and -- in the world of artificial intelligence -- adversarial examples. Adversarial examples are perturbations added to an image that are invisible to the human eye but can trick a computer vision system into misclassifying objects -- potentially causing, for example, an autonomous vehicle to drive through a stop sign. Adversarial examples are a bane to the researchers who build the neural networks that deliver much of today's advanced AI. Now, a team from Google and Johns Hopkins University says it has found a silver lining to adversarial examples.
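The silver lining reported in the underlying paper (the AdvProp approach) is that adversarial examples can serve as extra training data, provided they are routed through an auxiliary batch-norm layer so their statistics don't contaminate those of clean images. A rough sketch of that dual-batch-norm idea; the block structure and channel count are illustrative, not the paper's exact architecture:

```python
import torch.nn as nn

class DualBNBlock(nn.Module):
    """Conv block that keeps separate batch-norm statistics for
    clean and adversarial inputs, as in AdvProp-style training."""

    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn_clean = nn.BatchNorm2d(channels)  # statistics of clean images
        self.bn_adv = nn.BatchNorm2d(channels)    # statistics of adversarial images

    def forward(self, x, adversarial=False):
        bn = self.bn_adv if adversarial else self.bn_clean
        return bn(self.conv(x)).relu()
```

In AdvProp, only the clean batch norms are used at test time, so the network pays no inference-time cost for the adversarial augmentation.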


The Upside of Adversarial Attacks

#artificialintelligence

All deep learning systems are vulnerable to adversarial attacks, researchers warn. Tiny alterations to the input can cause these neural networks to classify pictures or other data totally incorrectly. While this is cause for concern, it also sparks research that may lead to better, more accountable artificial intelligence. Artificial intelligence (AI) based on neural networks has made spectacular progress in recent years.