In its latest annual report, filed with the Securities and Exchange Commission, tech giant Alphabet warned investors about the many challenges of artificial intelligence, following the lead of Microsoft, which issued similar warnings last August. Recent advances in deep learning and neural networks have created much hope about the possibilities AI presents in domains previously thought to be off-limits to computer software. But there is also concern about the new threats AI will pose, especially in fields where bad decisions can have destructive results. We have already seen some of these threats manifest in various ways, including biased algorithms, AI-based forgery, and the spread of fake news during important events such as elections. The past few years have seen a growing discussion around building trust in artificial intelligence and creating safeguards that prevent abuse and malicious behavior by AI models.
A neural network looks at a picture of a turtle and sees a rifle. A self-driving car blows past a stop sign because a carefully crafted sticker bamboozled its computer vision. An eyeglass frame confuses facial recognition tech into thinking a random dude is actress Milla Jovovich. The hacking of artificial intelligence is an emerging security crisis. Pre-empting criminals attempting to hijack artificial intelligence by tampering with datasets or the physical environment, researchers have turned to adversarial machine learning.
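The turtle-to-rifle and sticker attacks above are instances of adversarial examples: tiny, deliberately chosen perturbations that flip a model's output. A minimal sketch of the idea, using the fast gradient sign method (FGSM) against a toy logistic-regression "image classifier" (the weights and input here are made-up stand-ins, not a real vision model):

```python
import numpy as np

# Hypothetical linear model standing in for an image classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=100) * 0.1   # model weights (small, so outputs don't saturate)
b = 0.0
x = rng.normal(size=100)         # the "image", flattened into a feature vector

def predict(x):
    """Probability the input belongs to class 1 (say, 'turtle')."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, true_label, eps):
    """FGSM: nudge every pixel by +/- eps in the direction that
    increases the loss for the true label. For logistic loss on a
    linear model, the input gradient is (p - y) * w."""
    grad = (predict(x) - true_label) * w
    return x + eps * np.sign(grad)

x_adv = fgsm(x, true_label=1, eps=0.25)

print(predict(x))      # model's confidence on the clean input
print(predict(x_adv))  # confidence drops after an imperceptibly small change
```

The key point is that each coordinate moves by at most eps, so the perturbed input looks essentially identical, yet the aligned per-pixel nudges add up to a large shift in the model's output. The same principle, applied through a deep network's gradients, produces the turtle-rifle images.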
While advances in deep learning and neural network algorithms have brought some interesting innovations in the last few years, they come with their own set of challenges. Researchers have now found that these advances may pose security threats, since the models depend on the large sets of data they are trained on. While we have seen attacks such as IoT botnets, phishing, and crypto-ransomware in the past, neural networks can be the target of attacks too. This article largely discusses how deep learning and neural networks are on the verge of facing serious security threats. The most captivating, and unsettling, promise of deep learning is enabling machines to learn without human supervision.
Adapted from You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place, by Janelle Shane. Suppose you're running security at a cockroach farm. You've got advanced image recognition technology on all the cameras, ready to sound the alarm at the slightest sign of trouble. The day goes uneventfully until, reviewing the logs at the end of your shift, you notice that although the system has recorded zero instances of cockroaches escaping into the staff-only areas, it has recorded seven instances of giraffes. Thinking this a bit odd, perhaps, but not yet alarming, you decide to review the camera footage.
Adversarial input can fool a machine-learning algorithm into misperceiving images. Over the past five years, machine learning has blossomed from a promising but immature technology into one that can achieve close to human-level performance on a wide array of tasks. In the near future, it is likely to be incorporated into an increasing number of technologies that directly impact society, from self-driving cars to virtual assistants to facial-recognition software. Yet machine learning also offers brand-new opportunities for hackers. Malicious inputs specially crafted by an adversary can "poison" a machine learning algorithm during its training period, or dupe it after it has been trained.
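The poisoning attack mentioned above can be illustrated with a deliberately tiny example: a nearest-centroid classifier whose decision boundary sits midway between the two class means. All the data here is invented for illustration; the point is only to show how a few mislabeled training points injected by an attacker drag the boundary:

```python
import numpy as np

# Hypothetical 1-D training data for a nearest-centroid classifier.
clean_a = np.array([0.0, 0.2, 0.4])   # samples labeled class A
clean_b = np.array([2.0, 2.2, 2.4])   # samples labeled class B

def boundary(a, b):
    """The decision boundary is the midpoint between the class means:
    anything left of it is classified A, anything right of it B."""
    return (a.mean() + b.mean()) / 2

print(boundary(clean_a, clean_b))  # clean boundary, near 1.2

# Poisoning during training: the attacker injects a few points labeled
# 'A' that actually sit far inside class B's territory, pulling A's
# mean (and with it the boundary) to the right.
poison = np.array([5.0, 5.0])
poisoned_a = np.concatenate([clean_a, poison])

print(boundary(poisoned_a, clean_b))  # boundary shifts past 2.0
```

After poisoning, a genuine class-B sample such as 2.0 lands on the A side of the boundary, even though the attacker never touched the test input; this is the training-time counterpart of the test-time "duping" the paragraph describes.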