Robust AI: Protecting neural networks against adversarial attacks

#artificialintelligence

In its latest annual report, filed with the Securities and Exchange Commission, tech giant Alphabet warned investors about the many challenges of artificial intelligence, following the lead of Microsoft, which issued similar warnings last August. Recent advances in deep learning and neural networks have created much hope about the possibilities AI presents in domains that were previously thought to be off limits for computer software. But there's also concern about the new threats AI will pose to different fields, especially where bad decisions can have very destructive results. We've already seen some of these threats manifest in various ways, including biased algorithms, AI-based forgery and the spread of fake news during important events such as elections. The past few years have seen a growing discussion around building trust in artificial intelligence and creating safeguards that prevent abuse and malicious use of AI models.


To cripple AI, hackers are turning data against itself

#artificialintelligence

A neural network looks at a picture of a turtle and sees a rifle. A self-driving car blows past a stop sign because a carefully crafted sticker bamboozled its computer vision. An eyeglass frame confuses facial recognition tech into thinking a random dude is actress Milla Jovovich. The hacking of artificial intelligence is an emerging security crisis. Pre-empting criminals who would hijack artificial intelligence by tampering with datasets or the physical environment, researchers have turned to adversarial machine learning.
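All three of these attacks rest on adversarial examples: inputs perturbed just enough to flip a model's prediction while looking unchanged to a human. As a rough illustration of the idea (not the method used in the turtle, stop-sign or eyeglass studies), here is a minimal sketch of the fast gradient sign method in PyTorch; the `model`, `image`, `label` and `epsilon` names are assumptions standing in for any differentiable image classifier and its input.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step fast gradient sign method: nudge every pixel a small amount
    in the direction that increases the classifier's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step by epsilon along the sign of the gradient, then keep pixels in [0, 1].
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Even with a perturbation this small, a vulnerable classifier will often assign the perturbed image to a completely different class, which is what makes the physical-world versions of these attacks so alarming.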


5 Instances Where Neural Networks & Deep Learning Came Under Attack

#artificialintelligence

While the advancements in deep learning and neural network algorithms have brought some interesting innovations in the last few years, they come with their own set of challenges. Researchers have now found that these advancements may pose security threats because of the large volumes of data they deal with. While we have seen attacks such as IoT botnet attacks, phishing attacks and crypto-ransomware attacks in the past, neural networks can be targeted by attacks of their own. This article largely talks about how deep learning and neural networks are on the verge of facing serious security threats. The most captivating, and most unsettling, promise of deep learning is enabling machines to learn without human supervision.


Learning Securely

Communications of the ACM

Adversarial input can fool a machine-learning algorithm into misperceiving images. Over the past five years, machine learning has blossomed from a promising but immature technology into one that can achieve close to human-level performance on a wide array of tasks. In the near future, it is likely to be incorporated into an increasing number of technologies that directly impact society, from self-driving cars to virtual assistants to facial-recognition software. Yet machine learning also offers brand-new opportunities for hackers. Malicious inputs specially crafted by an adversary can "poison" a machine learning algorithm during its training period, or dupe it after it has been trained.
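To make the distinction between training-time "poisoning" and test-time evasion concrete, here is a small, self-contained sketch of the simplest poisoning strategy, label flipping, using scikit-learn on synthetic data; the dataset, model and flip rates are illustrative assumptions, not anything measured in the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative synthetic data; a real poisoning attack would target the
# victim's actual training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, rate, rng):
    """Label-flipping poisoning: an attacker who controls part of the
    training data mislabels a chosen fraction of it."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(0)
for rate in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, rate, rng))
    print(f"poison rate {rate:.0%}: test accuracy {clf.score(X_test, y_test):.3f}")
```

The point of the exercise is that the model itself is never touched: corrupting a slice of the data it learns from is enough to degrade it, which is why data provenance is part of the security picture.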


Researchers Tricked AI Into Doing Free Computations It Wasn't Trained to Do

#artificialintelligence

Facial recognition systems have become ruthlessly efficient at picking people out of a crowd in recent years, and people are finding ways to thwart the artificial intelligence that powers them. Research has already shown that AI can be fooled into seeing something that's not there, and now these algorithms can be hijacked and reprogrammed. Despite recent advances, the technology behind facial recognition, a deep learning application known as machine vision, leaves much to be desired. Many computer vision algorithms are still at a point where they're liable to make mistakes, such as mislabeling a turtle as a gun. These mistakes can be weaponized by subtly manipulating images so that they cause computers to "see" specific things; for example, a few stickers on a stop sign can cause a self-driving car to misread it entirely.
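The "hijacked and reprogrammed" result refers to adversarial reprogramming: a single learned perturbation that coaxes a frozen classifier into doing free computation for a task it was never trained on. Below is a hedged sketch of the idea in PyTorch; the ResNet victim, the MNIST-style 28x28 inputs, the remapping of the first ten output classes and every hyperparameter are assumptions made for illustration, not the researchers' actual code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen victim network (assumption: an ImageNet ResNet stands in for the
# attacked model; load pretrained weights in practice).
victim = models.resnet18(weights=None)
victim.eval()
for p in victim.parameters():
    p.requires_grad_(False)

# The "adversarial program": one learnable perturbation the size of the
# victim's expected input, reused for every example of the new task.
program = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([program], lr=0.05)

def embed(mnist_batch):
    """Paste small 28x28 task inputs into the centre of a blank 224x224 frame."""
    frame = torch.zeros(mnist_batch.size(0), 3, 224, 224)
    frame[:, :, 98:126, 98:126] = mnist_batch  # broadcast the grey digit to 3 channels
    return frame

def reprogram_step(mnist_batch, digit_labels):
    """One optimisation step: only the perturbation is trained, never the victim."""
    x = embed(mnist_batch) + torch.tanh(program)   # bounded perturbation over the frame
    logits = victim(x)[:, :10]                     # remap first 10 ImageNet classes to digits 0-9
    loss = F.cross_entropy(logits, digit_labels)
    optimizer.zero_grad()
    loss.backward()                                # gradients flow only into `program`
    optimizer.step()
    return loss.item()
```

After enough steps of this kind, the frozen network's outputs can be read off as answers to the attacker's task, effectively stealing compute from a model its owner never intended to expose for that purpose.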