OpCode-Based Malware Classification Using Machine Learning and Deep Learning Techniques

Saini, Varij, Gupta, Rudraksh, Soni, Neel

arXiv.org Artificial Intelligence

This technical report presents a comprehensive analysis of malware classification using OpCode sequences. Two distinct approaches are evaluated: traditional machine learning using n-gram analysis with Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Decision Tree classifiers; and a deep learning approach employing a Convolutional Neural Network (CNN). The traditional machine learning approach establishes a baseline using handcrafted 1-gram and 2-gram features from disassembled malware samples. The deep learning methodology builds upon the work proposed in "Deep Android Malware Detection" by McLaughlin et al. and evaluates the performance of a CNN model trained to automatically extract features from raw OpCode data. Empirical results are compared using standard performance metrics (accuracy, precision, recall, and F1-score). While the SVM classifier outperforms other traditional techniques, the CNN model demonstrates competitive performance with the added benefit of automated feature extraction.
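The traditional pipeline described above, handcrafted 1-gram and 2-gram features over disassembled OpCode sequences fed to classifiers such as an SVM, can be sketched roughly as follows. The opcode sequence and vocabulary here are illustrative assumptions, not the authors' actual feature set:

```python
from collections import Counter

def opcode_ngrams(opcodes, n):
    """Count contiguous n-grams in a disassembled opcode sequence."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

def feature_vector(opcodes, vocabulary):
    """Map one sample to counts over a fixed 1-gram + 2-gram vocabulary."""
    counts = opcode_ngrams(opcodes, 1) + opcode_ngrams(opcodes, 2)
    return [counts.get(gram, 0) for gram in vocabulary]

# Hypothetical opcodes from one disassembled sample.
sample = ["mov", "push", "mov", "call", "ret"]
vocab = [("mov",), ("call",), ("mov", "call")]
print(feature_vector(sample, vocab))  # [2, 1, 1]
```

Vectors built this way would then be fed to SVM, KNN, or Decision Tree classifiers, while the CNN branch of the report consumes the raw opcode stream directly.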


Evaluating the Usability of LLMs in Threat Intelligence Enrichment

Srikanth, Sanchana, Hasanuzzaman, Mohammad, Meem, Farah Tasnur

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have the potential to significantly enhance threat intelligence by automating the collection, preprocessing, and analysis of threat data. However, the usability of these tools is critical to ensure their effective adoption by security professionals. Despite the advanced capabilities of LLMs, concerns persist about their reliability, accuracy, and potential for generating inaccurate information. This study conducts a comprehensive usability evaluation of five LLMs (ChatGPT, Gemini, Cohere, Copilot, and Meta AI), focusing on their user interface design, error handling, learning curve, performance, and integration with existing threat intelligence enrichment tools. Using a heuristic walkthrough and a user study methodology, we identify key usability issues and offer actionable recommendations for improvement. Our findings aim to bridge the gap between LLM functionality and user experience, promoting more efficient and accurate threat intelligence practices by ensuring these tools are user-friendly and reliable.


Building an Effective Email Spam Classification Model with spaCy

Taghandiki, Kazem

arXiv.org Artificial Intelligence

Today, people use email services such as Gmail, Outlook, and AOL Mail to exchange information and official correspondence quickly. Spam, or junk mail, is a major challenge to this type of communication; it is usually sent in bulk by botnets for advertising, causing harm, and stealing information from different people. Receiving unwanted spam emails daily fills up the inbox folder, so spam detection is a fundamental challenge, and many works so far have detected spam using clustering and text categorisation methods. In this article, the author uses the spaCy natural language processing library and three machine learning (ML) algorithms, Naive Bayes (NB), Decision Tree (C4.5), and Multilayer Perceptron (MLP), in the Python programming language to detect spam emails collected from the Gmail service. Observations show an accuracy of 96% for the Multilayer Perceptron (MLP) algorithm in spam detection.
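A rough sketch of the Naive Bayes branch of such a pipeline is below; a trivial whitespace tokeniser stands in for spaCy's preprocessing, and a two-message toy corpus replaces the Gmail data, so this is only an illustration of the technique, not the paper's implementation:

```python
import math
from collections import Counter

def train_nb(docs):
    """Multinomial Naive Bayes with Laplace smoothing. docs: [(tokens, label)]."""
    labels = {label for _, label in docs}
    priors = {l: sum(1 for _, y in docs if y == l) / len(docs) for l in labels}
    counts = {l: Counter() for l in labels}
    for tokens, label in docs:
        counts[label].update(tokens)
    vocab = {t for tokens, _ in docs for t in tokens}
    return priors, counts, vocab

def classify(tokens, priors, counts, vocab):
    """Pick the label maximising log-prior plus smoothed log-likelihoods."""
    def score(label):
        total = sum(counts[label].values()) + len(vocab)
        return math.log(priors[label]) + sum(
            math.log((counts[label][t] + 1) / total) for t in tokens if t in vocab)
    return max(priors, key=score)

# Toy corpus; a real system would tokenise and lemmatise emails with spaCy.
train = [("win free prize now".split(), "spam"),
         ("meeting agenda attached".split(), "ham")]
model = train_nb(train)
print(classify("free prize".split(), *model))  # spam
```

The same feature representation could feed the article's C4.5 and MLP classifiers.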


Automatic summarisation of Instagram social network posts: Combining semantic and statistical approaches

Taghandiki, Kazem, Ahmadi, Mohammad Hassan, Ehsan, Elnaz Rezaei

arXiv.org Artificial Intelligence

The proliferation of data and text documents such as articles, web pages, books, and social network posts on the Internet has created a fundamental challenge in various fields of text processing, known as "automatic text summarisation". Manual processing and summarisation of large volumes of textual data is difficult, expensive, and time-consuming for human users, and at scale effectively impossible. Text summarisation systems are divided into extractive and abstractive categories. In the extractive method, the final summary of a document is assembled from the important sentences of that document without modification; as a result, sentences may be repeated and unresolved pronoun references may appear. In the abstractive method, by contrast, the final summary is generated from the meaning of the sentences and words of the document or of other documents. Many existing works have used extractive or abstractive methods to summarise collections of web documents, each with advantages and disadvantages in terms of similarity or summary size. In this work, a crawler is developed to extract popular text posts from the Instagram social network with appropriate preprocessing, and a set of extractive and abstractive algorithms is combined, showing how each algorithm can be applied. Observations on 820 popular text posts from Instagram show an accuracy of 80% for the proposed system.
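The extractive side of such a system can be illustrated with simple frequency-based sentence scoring; this minimal sketch stands in for the paper's combined semantic and statistical pipeline and makes no claim about its actual scoring function:

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Score each sentence by summed word frequency; keep the top-k in order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    ranked = sorted(range(len(sentences)), key=lambda i: -sum(
        freq[w] for w in re.findall(r"\w+", sentences[i].lower())))
    keep = sorted(ranked[:k])  # restore document order
    return " ".join(sentences[i] for i in keep)

post = "Cats sleep a lot. Cats love cats and cats. Dogs bark."
print(extractive_summary(post, k=1))  # Cats love cats and cats.
```

An abstractive stage would then paraphrase the selected sentences rather than copy them verbatim.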


Types of Approaches, Applications and Challenges in the Development of Sentiment Analysis Systems

Taghandiki, Kazem, Ehsan, Elnaz Rezaei

arXiv.org Artificial Intelligence

Today, the web has become an essential platform for users to express opinions, emotions, and feelings about various events. With a smartphone, anyone can post an opinion about a product purchase, an accident, the outbreak of a new disease, and so on in blogs and on social networks such as Twitter, WhatsApp, Telegram, and Instagram. Millions of comments are therefore recorded daily, creating a huge volume of unstructured text data from which useful knowledge can be extracted using natural language processing methods. Sentiment analysis is one of the important applications of natural language processing and machine learning, allowing us to analyse the sentiments in comments and other textual information recorded by web users. The following sections explain sentiment analysis and the approaches and challenges in this field.
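One of the approach families such surveys cover is lexicon-based scoring, which can be sketched in a few lines; the tiny lexicon here is illustrative only, whereas real systems use resources such as SentiWordNet or trained classifiers:

```python
import re

# Tiny illustrative lexicon; entries and weights are invented for the example.
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "terrible": -2, "hate": -2}

def sentiment(text):
    """Sum lexicon scores over tokens; the sign gives the polarity label."""
    score = sum(LEXICON.get(t, 0) for t in re.findall(r"\w+", text.lower()))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this phone, great battery"))  # positive
```

Machine-learning approaches instead learn these weights from labelled comments, trading lexicon maintenance for annotation effort.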


Implementation of a noisy hyperlink removal system: A semantic and relatedness approach

Taghandiki, Kazem, Ehsan, Elnaz Rezaei

arXiv.org Artificial Intelligence

As the volume of data on the web grows, the web structure graph, a graph representation of the web, continues to evolve. Its structure has gradually shifted from content-based to non-content-based. Furthermore, spam data such as noisy hyperlinks in the web structure graph adversely affects the speed and efficiency of information retrieval and link mining algorithms. Previous works in this area have removed noisy hyperlinks using structural and string-based approaches; however, these approaches may incorrectly remove useful links or fail to detect noisy hyperlinks in certain circumstances. In this paper, a dataset of hyperlinks is first constructed using an interactive crawler. The semantic and relatedness structure of the hyperlinks is then studied through semantic web approaches and tools such as the DBpedia ontology. Finally, noisy hyperlinks are removed using a reasoner over the DBpedia ontology. Our experiments demonstrate the accuracy and ability of semantic web technologies to remove noisy hyperlinks.
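The core idea, keep a hyperlink only when its topic is semantically related to the page's topic, can be shown with a toy in-memory stand-in; in the paper the topics and relations come from the DBpedia ontology and a reasoner, whereas the hand-made category map below is purely illustrative:

```python
# Toy stand-in for ontology-backed relatedness: a hand-made topic-to-category
# map plays the role that DBpedia classes and a reasoner play in the paper.
CATEGORY = {"python": "programming", "numpy": "programming",
            "casino": "gambling", "poker": "gambling"}

def remove_noisy_links(page_topic, link_topics):
    """Keep only hyperlinks whose anchor topic shares the page's category."""
    page_cat = CATEGORY.get(page_topic)
    return [t for t in link_topics if CATEGORY.get(t) == page_cat]

print(remove_noisy_links("python", ["numpy", "casino", "poker"]))  # ['numpy']
```

A real implementation would issue SPARQL queries against DBpedia to resolve each anchor to an ontology class before comparing.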


A systematic literature review on Robotic Process Automation security

Gajjar, Nishith, Rathod, Keyur, Jani, Khushali

arXiv.org Artificial Intelligence

The modern technological era is overflowing with new technologies, and such cutting-edge facilities come with risks and pitfalls. Robotic Process Automation is an innovation that enables the automation of high-volume, manual, repeatable, routine, rule-based, and unmotivating human tasks. The principal objective of Robotic Process Automation is to replace monotonous human tasks with a virtual workforce or a digital agent performing the same work the human worker used to perform. This allows human workers to focus on difficult tasks and problem solving. Robotic Process Automation tools are regarded as straightforward and powerful for specific business process automation. Robotic Process Automation includes intelligence to decide whether a process should occur: it can analyse the data presented and make a decision based on the logic parameters set by the developer. Moreover, unlike other forms of automation, it does not demand system integration. However, since the technology is still emerging, Robotic Process Automation faces several challenges during implementation.