EvilModel: Hiding Malware Inside of Neural Network Models

arXiv.org Artificial Intelligence

Delivering malware covertly and evasively is critical to advanced malware campaigns. In this paper, we present a new method to covertly and evasively deliver malware through a neural network model. Neural network models are poorly explainable and have good generalization ability. By embedding malware in neurons, the malware can be delivered covertly, with little or no impact on the performance of the neural network. Meanwhile, because the structure of the neural network model remains unchanged, it can pass the security scans of antivirus engines. Experiments show that 36.9MB of malware can be embedded in a 178MB AlexNet model with less than 1% accuracy loss, and no suspicion is raised by the antivirus engines in VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, utilizing neural networks for attacks is becoming a growing trend. We hope this work can provide a reference scenario for defending against neural network-assisted attacks.
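
As a rough illustration of the embedding idea (a minimal sketch, not the authors' exact EvilModel procedure; embed_payload and extract_payload are hypothetical helpers), payload bytes can be written into the low-order bytes of float32 weights so the high-order sign/exponent byte of each parameter is preserved:

# Minimal sketch (not the paper's exact procedure): hide payload bytes in the
# low-order bytes of float32 weights, leaving the most significant byte intact.
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes, bytes_per_param: int = 3) -> np.ndarray:
    """Overwrite the low-order bytes of each float32 weight with payload bytes."""
    flat = weights.astype(np.float32).ravel()
    if len(payload) > flat.size * bytes_per_param:
        raise ValueError("payload too large for this weight tensor")
    raw = flat.view(np.uint8).reshape(-1, 4)          # 4 bytes per float32 (little-endian)
    for i, b in enumerate(payload):
        param, offset = divmod(i, bytes_per_param)
        raw[param, offset] = b                        # byte 3 (sign/exponent) is never touched
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int, bytes_per_param: int = 3) -> bytes:
    """Recover the embedded bytes from the modified weights."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return bytes(raw[i // bytes_per_param, i % bytes_per_param] for i in range(length))

# Example: embed and recover a short byte string.
w = np.random.randn(1000).astype(np.float32)
stego = embed_payload(w, b"payload-bytes")
assert extract_payload(stego, len(b"payload-bytes")) == b"payload-bytes"

Because the high-order byte of every weight is preserved, the change to each value is bounded, which is consistent with the small accuracy impact reported in the paper.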


The Looming Rise of AI-Powered Malware

#artificialintelligence

In the past two years, we've learned that machine learning algorithms can manipulate public opinion, cause fatal car crashes, create fake porn, and manifest extremely sexist and racist behavior. And now, the cybersecurity threats of deep learning and neural networks are emerging. We're just beginning to catch glimpses of a future in which cybercriminals trick neural networks into making fatal mistakes and use deep learning to hide their malware and find their targets among millions of users. Part of the challenge of securing artificial intelligence applications lies in the fact that it's hard to explain how they work, and even the people who create them are often hard-pressed to make sense of their inner workings. But unless we prepare ourselves for what is to come, we'll learn to appreciate and react to these threats the hard way.


How to confuse antimalware neural networks. Adversarial attacks and protection

#artificialintelligence

Nowadays, cybersecurity companies implement a variety of methods to discover new, previously unknown malware files. Machine learning (ML) is a powerful and widely used approach for this task. At Kaspersky we have a number of complex ML models based on different file features, including models for static and dynamic detection, for processing sandbox logs and system events, etc. We implement different machine learning techniques, including deep neural networks, one of the most promising technologies that make it possible to work with large amounts of data, incorporate different types of features, and boast a high accuracy rate. But can we rely entirely on machine learning approaches in the battle with the bad guys? Or could powerful AI itself be vulnerable? In this article we attempt to attack our product's anti-malware neural network models and check existing defense methods. An adversarial attack is a method of making small modifications to objects in such a way that the machine learning model begins to misclassify them.
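
A generic sketch of this idea (the classic FGSM method on a toy PyTorch classifier, not Kaspersky's production models; the feature dimensions and labels are hypothetical):

# Minimal FGSM-style sketch: nudge an input feature vector in the direction that
# increases the model's loss on its true label, which can flip the prediction.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    """Shift each input feature by +/-eps along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy usage: a small classifier over static file features (hypothetical setup).
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 32)
y = torch.tensor([1])                      # "malicious" class in this toy setup
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())

In practice, malware features are often discrete and constrained, so a real attack must limit its modifications to changes that keep the file valid and functional.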


Researchers demonstrate that malware can be hidden inside AI models

#artificialintelligence

Researchers Zhi Wang, Chaoge Liu, and Xiang Cui published a paper last Monday demonstrating a new technique for slipping malware past automated detection tools--in this case, by hiding it inside a neural network. The three embedded 36.9MiB of malware into a 178MiB AlexNet model without significantly altering the function of the model itself. The malware-embedded model classified images with near-identical accuracy, within 1% of the malware-free model. Just as importantly, squirreling the malware away into the model broke it up in ways that prevented detection by standard antivirus engines. VirusTotal, a service that "inspects items with over 70 antivirus scanners and URL/domain blocklisting services, in addition to a myriad of tools to extract signals from the studied content," did not raise any suspicions about the malware-embedded model.
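
A minimal evaluation sketch of that accuracy comparison (hypothetical model and data loader names, not the researchers' code) might look like:

# Compare top-1 accuracy of a clean model against a payload-embedded copy to
# check whether the drop stays within the reported 1%.
import torch

@torch.no_grad()
def top1_accuracy(model: torch.nn.Module, loader) -> float:
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# clean_model, stego_model and val_loader are assumed to be provided elsewhere.
# acc_clean = top1_accuracy(clean_model, val_loader)
# acc_stego = top1_accuracy(stego_model, val_loader)
# print(f"accuracy drop: {acc_clean - acc_stego:.4f}")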


MalBERT: Using Transformers for Cybersecurity and Malicious Software Detection

arXiv.org Artificial Intelligence

In recent years we have witnessed an increase in cyber threats and malicious software attacks on different platforms, with serious consequences for individuals and businesses. It has become critical to find automated machine learning techniques to proactively defend against malware. Transformers, a category of attention-based deep learning techniques, have recently shown impressive results in solving different tasks, mainly in the field of Natural Language Processing (NLP). In this paper, we propose the use of a Transformer architecture to automatically detect malicious software. We propose a model based on BERT (Bidirectional Encoder Representations from Transformers) which performs a static analysis on the source code of Android applications, using preprocessed features to characterize existing malware and classify it into different representative malware categories. The obtained results are promising and show the high performance obtained by Transformer-based models for malicious software detection.
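
A minimal sketch of such a pipeline (using the Hugging Face transformers library on toy inputs; the label count and feature text are hypothetical, not the authors' MalBERT setup):

# Minimal sketch: a BERT sequence classifier over preprocessed static features
# (e.g., permissions and API calls extracted from an Android app).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_CLASSES = 5  # hypothetical number of malware categories
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=NUM_CLASSES)

# A toy "document" built from static features extracted ahead of time.
text = "permission.INTERNET permission.SEND_SMS intent.BOOT_COMPLETED api.getDeviceId"
batch = tokenizer(text, truncation=True, padding=True, return_tensors="pt")
labels = torch.tensor([2])  # hypothetical category index

outputs = model(**batch, labels=labels)   # returns loss for training and logits for prediction
outputs.loss.backward()                   # an optimizer step would follow in a real training loop
predicted = outputs.logits.argmax(dim=-1).item()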