Android Malware Detection Based on RGB Images and Multi-feature Fusion

Wang, Zhiqiang, Yu, Qiulong, Yuan, Sicheng

arXiv.org Artificial Intelligence

With the widespread adoption of smartphones, Android malware has become a significant challenge in the field of mobile device security. Current Android malware detection methods often rely on feature engineering to construct dynamic or static features, which are then used for learning. However, static feature-based methods struggle to counter code obfuscation, packing, and signing techniques, while dynamic feature-based methods involve time-consuming feature extraction. Image-based methods for Android malware detection offer better resilience against malware variants and polymorphic malware. This paper proposes an end-to-end Android malware detection technique based on RGB images and multi-feature fusion. The approach involves extracting Dalvik Executable (DEX) files, AndroidManifest.xml files, and API calls from APK files, converting them into grayscale images, and enhancing their texture features using Canny edge detection, histogram equalization, and adaptive thresholding techniques. These grayscale images are then combined into an RGB image containing multi-feature fusion information, which is analyzed using mainstream image classification models for Android malware detection. Extensive experiments demonstrate that the proposed method effectively captures Android malware characteristics, achieving an accuracy of up to 97.25%, outperforming existing detection methods that rely solely on DEX files as classification features. Additionally, ablation experiments confirm the effectiveness of using the three key files for feature representation in the proposed approach.
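The core file-to-image step described above can be sketched in plain Python: raw bytes of a file become one grayscale channel, and three such channels (DEX, AndroidManifest.xml, API calls) are stacked into a single RGB image. This is a minimal sketch under assumptions: the function names and the fixed row width are illustrative, and the paper's texture-enhancement passes (Canny edge detection, histogram equalization, adaptive thresholding), which would typically be done with an image library such as OpenCV, are omitted.

```python
def bytes_to_grayscale(data: bytes, width: int = 256) -> list[list[int]]:
    """Map raw file bytes to a 2D grayscale matrix: each byte is one
    pixel intensity (0-255). The stream is zero-padded to fill the
    last row so every row has the same width."""
    padded = data + b"\x00" * (-len(data) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]


def fuse_rgb(dex_img, manifest_img, api_img):
    """Fuse three equally sized grayscale matrices into one RGB image:
    DEX bytes fill the R channel, AndroidManifest.xml the G channel,
    and the API-call representation the B channel."""
    return [
        [(dex_img[y][x], manifest_img[y][x], api_img[y][x])
         for x in range(len(dex_img[0]))]
        for y in range(len(dex_img))
    ]
```

The fused image can then be fed to any standard image classifier; the per-channel separation is what lets the network attend to each feature source independently.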


Improving Android Malware Detection Through Data Augmentation Using Wasserstein Generative Adversarial Networks

Stalin, Kawana, Mekoya, Mikias Berhanu

arXiv.org Artificial Intelligence

Generative Adversarial Networks (GANs) have demonstrated their versatility across various applications, including data augmentation and malware detection. This research explores the effectiveness of utilizing GAN-generated data to train a model for the detection of Android malware. Given the considerable storage requirements of Android applications, the study proposes a method to synthetically represent data using GANs, thereby reducing storage demands. The proposed methodology involves creating image representations of features extracted from an existing dataset. A GAN model is then employed to generate a more extensive dataset consisting of realistic synthetic grayscale images. Subsequently, this synthetic dataset is utilized to train a Convolutional Neural Network (CNN) designed to identify previously unseen Android malware applications. The study includes a comparative analysis of the CNN's performance when trained on real images versus synthetic images generated by the GAN. Furthermore, the research explores variations in performance between the Wasserstein Generative Adversarial Network (WGAN) and the Deep Convolutional Generative Adversarial Network (DCGAN). The investigation extends to studying the impact of image size and malware obfuscation on the classification model's effectiveness. The data augmentation approach implemented in this study resulted in a notable performance enhancement of the classification model, ranging from 1.5% to 7%, depending on the dataset. The highest achieved F1 score reached 0.975.

Keywords: Generative Adversarial Networks, Android Malware, Data Augmentation, Wasserstein Generative Adversarial Network
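What distinguishes the WGAN compared in this study from a standard (e.g. DCGAN-style) GAN is its objective: the critic estimates a Wasserstein distance rather than a binary real/fake probability, so the losses are plain means of critic scores with no sigmoid or log. A minimal sketch of those two losses follows; the function names are illustrative, and the Lipschitz constraint a real WGAN also needs (weight clipping or a gradient penalty) is omitted.

```python
def critic_loss(real_scores, fake_scores):
    """WGAN critic loss: minimize E[D(fake)] - E[D(real)], i.e. push
    scores on real samples up and scores on generated samples down.
    Scores are unbounded reals, not probabilities."""
    return (sum(fake_scores) / len(fake_scores)
            - sum(real_scores) / len(real_scores))


def generator_loss(fake_scores):
    """WGAN generator loss: minimize -E[D(fake)], i.e. make the
    critic score generated samples as highly as possible."""
    return -sum(fake_scores) / len(fake_scores)
```

In practice this objective tends to give smoother training signals than the saturating cross-entropy loss, which is one common motivation for choosing WGAN-based augmentation.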


How to Add a Voice Assistant to your Mobile App?

#artificialintelligence

Don't you think that a great many mobile apps would be a lot more convenient if they had voice control? In most cases, voice navigation or conversational form-filling is enough. Using the example of Habitica (an open-source, Kotlin-based habit-tracking app), Vit Gorbachyov, solution architect at Just AI, shows how to add a voice interface to any app swiftly and seamlessly. Let's start with the obvious: most of the time, voice is simply quicker. Consider ordering a ticket by saying "get me a plane to London for tomorrow for two" instead of filling out a lengthy form.


Building an App for Eye Filters with PoseNet

#artificialintelligence

Pose estimation is a computer vision task for detecting the pose (i.e., the position and orientation) of an object. It works by detecting a number of keypoints so that we can understand the main parts of the object and estimate its current orientation. Based on such keypoints, we will be able to form the shape of the object in either 2D or 3D. This tutorial covers how to build an Android app that estimates the human pose in standalone RGB images using the pretrained TFLite PoseNet model. The model predicts the locations of 17 keypoints of the human body, including the eyes, nose, and shoulders.
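Under the hood, PoseNet does not emit pixel coordinates directly: for each of the 17 keypoints it outputs a low-resolution heatmap of confidence scores plus offset grids, and the decoding step maps the heatmap's argmax cell back to input-image coordinates. The sketch below shows that decoding for a single keypoint; the function signature, nested-list tensors, and default output stride are illustrative assumptions, not the exact TFLite API.

```python
def decode_keypoint(heatmap, offsets_y, offsets_x, output_stride=32):
    """Decode one keypoint from a PoseNet-style heatmap.

    heatmap:    2D grid of confidence scores for this keypoint.
    offsets_*:  2D grids of sub-cell offsets matching the heatmap shape.
    Returns (y, x) in input-image pixels and the confidence score:
    the argmax cell is scaled by the output stride, then refined
    by the offset stored at that cell.
    """
    best_y, best_x, best = 0, 0, float("-inf")
    for y, row in enumerate(heatmap):
        for x, score in enumerate(row):
            if score > best:
                best_y, best_x, best = y, x, score
    py = best_y * output_stride + offsets_y[best_y][best_x]
    px = best_x * output_stride + offsets_x[best_y][best_x]
    return (py, px, best)
```

Running this once per keypoint channel yields the 17 body locations (eyes, nose, shoulders, and so on) that the eye-filter app then draws over.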


ML Kit Android: Implementing Text Recognition -- Firebase

#artificialintelligence

Now that Firebase is set up, we can start building our Text Recognition app. We need the Firebase ML Vision dependency, which we add to our app-level build.gradle. After capturing the image from the camera, we set it into the ImageView, and our app is ready to use. Run the app and click the camera icon to launch the camera on your Android device. Take a picture of some text, then tap the tick icon and watch Firebase do the magic for you.