
Collaborating Authors

 Lichouri, Mohamed


dzFinNlp at AraFinNLP: Improving Intent Detection in Financial Conversational Agents

arXiv.org Artificial Intelligence

The Arabic Financial NLP (AraFinNLP) shared task highlights the increasing importance of advanced Natural Language Processing (NLP) tools tailored for the financial sector in the Arab world. This initiative is particularly timely given the substantial growth of Middle Eastern stock markets, driven by diverse sectors across the region. This economic expansion underscores the need for sophisticated financial NLP systems capable of handling the unique linguistic and cultural nuances of [...] Long Short-Term Memory (LSTM) networks (Firdaus et al., 2021) and their bidirectional variants (BiLSTM) (Sreelakshmi et al., 2018) have provided more nuanced understanding by capturing the sequential nature of text. More recently, transformer-based models like BERT (Alshahrani et al., 2022) have set new benchmarks in NLP by leveraging self-attention mechanisms to understand contextual relationships within text, making them particularly effective for complex tasks like intent detection across varied dialects.
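As a concrete illustration of the bidirectional recurrent approach mentioned above, here is a minimal BiLSTM intent classifier sketched in PyTorch. The vocabulary size, hidden dimension, and number of intent classes are placeholder values for illustration, not figures from the AraFinNLP paper.

```python
# Minimal BiLSTM intent classifier sketch (PyTorch).
# All hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn

class BiLSTMIntentClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256, num_intents=20):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded customer queries
        embedded = self.embedding(token_ids)
        _, (hidden, _) = self.lstm(embedded)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat([hidden[-2], hidden[-1]], dim=-1)
        return self.classifier(sentence_repr)  # intent logits

model = BiLSTMIntentClassifier()
dummy_batch = torch.randint(1, 10_000, (4, 32))  # 4 queries, 32 tokens each
print(model(dummy_batch).shape)  # torch.Size([4, 20])
```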


dzStance at StanceEval2024: Arabic Stance Detection based on Sentence Transformers

arXiv.org Artificial Intelligence

This study compares Term Frequency-Inverse Document Frequency (TF-IDF) features with Sentence Transformers for detecting writers' stances--favorable, opposing, or neutral--towards three significant topics: COVID-19 vaccine, digital transformation, and women empowerment. Through empirical evaluation, we demonstrate that Sentence Transformers outperform TF-IDF features across various experimental setups. Our team, dzStance, participated in a stance detection competition, achieving the 13th position (74.91%) among 15 teams in Women Empowerment, 10th (73.43%) in COVID Vaccine, and 12th (66.97%) in Digital Transformation. Overall, our team's performance ranked 13th (71.77%) among all participants. Notably, our approach achieved promising F1-scores, highlighting its effectiveness in identifying writers' stances on diverse topics. These results underscore the potential of Sentence Transformers to enhance stance detection models for addressing critical societal issues.
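To make the comparison concrete, the sketch below contrasts the two feature pipelines the study evaluates: sparse TF-IDF features versus dense Sentence Transformer embeddings, each feeding the same downstream classifier. The pretrained model name, the logistic regression classifier, and the toy data are assumptions for illustration, not the dzStance configuration.

```python
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Placeholder data; the real task uses Arabic tweets labelled
# favorable / opposing / neutral for each topic.
train_texts = ["sample text 1", "sample text 2", "sample text 3"]
train_labels = ["favorable", "opposing", "neutral"]
test_texts = ["sample text 4", "sample text 5"]
test_labels = ["favorable", "neutral"]

# Pipeline A: sparse character n-gram TF-IDF features.
tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))
clf_tfidf = LogisticRegression(max_iter=1000).fit(
    tfidf.fit_transform(train_texts), train_labels)

# Pipeline B: dense multilingual sentence embeddings (model name assumed).
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
clf_embed = LogisticRegression(max_iter=1000).fit(
    encoder.encode(train_texts), train_labels)

for name, clf, feats in [
    ("TF-IDF", clf_tfidf, tfidf.transform(test_texts)),
    ("SentenceTransformer", clf_embed, encoder.encode(test_texts)),
]:
    print(name, f1_score(test_labels, clf.predict(feats), average="macro"))
```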


dzNLP at NADI 2024 Shared Task: Multi-Classifier Ensemble with Weighted Voting and TF-IDF Features

arXiv.org Artificial Intelligence

This paper presents the contribution of our dzNLP team to the NADI 2024 shared task, specifically in Subtask 1 - Multi-label Country-level Dialect Identification (MLDID) (Closed Track). We explored various configurations to address the challenge: in Experiment 1, we utilized a union of n-gram analyzers (word, character, character with word boundaries) with different n-gram values; in Experiment 2, we combined a union of Term Frequency-Inverse Document Frequency (TF-IDF) features with various weights; and in Experiment 3, we implemented a weighted majority voting scheme using three classifiers: Linear Support Vector Classifier (LSVC), Random Forest (RF), and K-Nearest Neighbors (KNN). Our approach, despite its simplicity and reliance on traditional machine learning techniques, demonstrated competitive performance in terms of F1-score and precision. Notably, we achieved the highest precision score of 63.22% among the participating teams. However, our overall F1 score was approximately 21%, significantly impacted by a low recall rate of 12.87%. This indicates that while our models were highly precise, they struggled to recall a broad range of dialect labels, highlighting a critical area for improvement in handling diverse dialectal variations.
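A rough sketch of how these three ingredients fit together in scikit-learn is shown below: a union of word and character TF-IDF analyzers, per-analyzer weights, and a weighted voting ensemble of LSVC, RF, and KNN. The n-gram ranges, transformer weights, and voting weights are illustrative assumptions, not the tuned values reported by the team.

```python
# Sketch of weighted TF-IDF feature union + weighted voting ensemble.
# All weights and n-gram ranges below are assumed for illustration.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

features = FeatureUnion(
    [
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
        ("char_wb", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ],
    transformer_weights={"word": 1.0, "char": 1.0, "char_wb": 0.5},  # assumed weights
)

ensemble = VotingClassifier(
    estimators=[
        ("lsvc", LinearSVC()),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",      # LinearSVC has no predict_proba, so hard (label) voting
    weights=[2, 1, 1],  # assumed per-classifier weights
)

model = Pipeline([("tfidf", features), ("vote", ensemble)])
# model.fit(train_texts, train_country_labels)
# predictions = model.predict(test_texts)
```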


USTHB at NADI 2023 shared task: Exploring Preprocessing and Feature Engineering Strategies for Arabic Dialect Identification

arXiv.org Artificial Intelligence

In this paper, we conduct an in-depth analysis of several key factors influencing the performance of Arabic Dialect Identification in the NADI'2023 shared task, with a specific focus on the first subtask involving country-level dialect identification. Our investigation encompasses the effects of surface preprocessing, morphological preprocessing, a FastText vector model, and the weighted concatenation of TF-IDF features. For classification purposes, we employ the Linear Support Vector Classification (LSVC) model. During the evaluation phase, our system demonstrates noteworthy results, achieving an F1 score of 62.51%. This result closely aligns with the average F1 score attained by the other systems submitted for the first subtask, which stands at 72.91%.
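The sketch below illustrates this kind of pipeline: simple Arabic surface normalization, a weighted concatenation of TF-IDF features, and a LinearSVC. The normalization rules and feature weights are common Arabic-NLP heuristics assumed here for illustration, and the FastText and morphological components are omitted; this is not the exact configuration from the paper.

```python
# Surface normalization + weighted TF-IDF concatenation + LinearSVC sketch.
# Normalization rules and weights are assumed, not the paper's settings.
import re
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

ARABIC_DIACRITICS = re.compile(r"[\u064B-\u0652]")  # fathatan .. sukun

def normalize(text: str) -> str:
    text = ARABIC_DIACRITICS.sub("", text)  # strip short-vowel diacritics
    text = re.sub("[إأآا]", "ا", text)       # unify alef variants
    text = re.sub("ى", "ي", text)            # alef maqsura -> ya
    text = re.sub("ة", "ه", text)            # ta marbuta -> ha
    return text

model = Pipeline([
    ("features", FeatureUnion(
        [
            ("word", TfidfVectorizer(preprocessor=normalize, analyzer="word")),
            ("char", TfidfVectorizer(preprocessor=normalize, analyzer="char",
                                     ngram_range=(2, 4))),
        ],
        transformer_weights={"word": 0.7, "char": 1.0},  # assumed weights
    )),
    ("clf", LinearSVC()),
])
# model.fit(train_sentences, train_dialect_labels)
```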