Multilingual corpora for the study of new concepts in the social sciences and humanities:

Kyriakoglou, Revekka, Pappa, Anna

arXiv.org Artificial Intelligence

This article presents a hybrid methodology for building a multilingual corpus designed to support the study of emerging concepts in the humanities and social sciences (HSS), illustrated here through the case of "non-technological innovation". The corpus relies on two complementary sources: (1) textual content automatically extracted from company websites, cleaned for French and English, and (2) annual reports collected and automatically filtered according to documentary criteria (year, format, duplication). The processing pipeline includes automatic language detection, filtering of non-relevant content, extraction of relevant segments, and enrichment with structural metadata. From this initial corpus, a derived dataset in English is created for machine learning purposes. For each occurrence of a term from the expert lexicon, a contextual block of five sentences is extracted (two preceding and two following the sentence containing the term). Each occurrence is annotated with the thematic category associated with the term, enabling the construction of data suitable for supervised classification tasks. This approach results in a reproducible and extensible resource, suitable both for analyzing lexical variability around emerging concepts and for generating datasets dedicated to natural language processing applications.
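
As a rough illustration of the context-extraction step described above, the sketch below pulls a five-sentence window around each lexicon hit; the lexicon entries, the sentence splitter, and the category labels are hypothetical placeholders, not the authors' actual pipeline.

import re

# Hypothetical lexicon: term -> thematic category (placeholder values).
LEXICON = {"organizational innovation": "ORGANISATION",
           "social innovation": "SOCIAL"}

def split_sentences(text):
    # Naive splitter; a real pipeline would use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract_contexts(text, window=2):
    # For each lexicon hit, keep the sentence plus `window` sentences on each side.
    sentences = split_sentences(text)
    examples = []
    for i, sentence in enumerate(sentences):
        for term, category in LEXICON.items():
            if term in sentence.lower():
                block = sentences[max(0, i - window): i + window + 1]
                examples.append({"term": term, "label": category,
                                 "context": " ".join(block)})
    return examples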


Search-Optimized Quantization in Biomedical Ontology Alignment

Bouaggad, Oussama, Grabar, Natalia

arXiv.org Artificial Intelligence

In the fast-moving world of AI, as organizations and researchers develop more advanced models, they face challenges due to their sheer size and computational demands. Deploying such models on edge devices or in resource-constrained environments adds further challenges related to energy consumption, memory usage and latency. To address these challenges, emerging trends are shaping the future of efficient model optimization techniques. From this premise, by employing supervised state-of-the-art transformer-based models, this research introduces a systematic method for ontology alignment, grounded in cosine-based semantic similarity between a biomedical layman vocabulary and the Unified Medical Language System (UMLS) Metathesaurus. It leverages Microsoft Olive to search for target optimizations among different Execution Providers (EPs) using the ONNX Runtime backend, followed by a dynamic quantization pipeline employing Intel Neural Compressor and IPEX (Intel Extension for PyTorch). Through our optimization process, we conduct extensive assessments on the two tasks from the DEFT 2020 Evaluation Campaign, achieving a new state-of-the-art in both. We keep performance metrics intact while attaining an average inference speed-up of 20x and reducing memory usage by approximately 70%.
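
For flavor, a hedged sketch of the dynamic quantization idea using plain ONNX Runtime; file names are placeholders, and the paper's actual pipeline additionally involves Microsoft Olive, Intel Neural Compressor and IPEX.

from onnxruntime import InferenceSession
from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize the exported FP32 model's weights to INT8.
quantize_dynamic(
    model_input="encoder_fp32.onnx",    # exported FP32 model (placeholder name)
    model_output="encoder_int8.onnx",   # quantized output (placeholder name)
    weight_type=QuantType.QInt8,        # 8-bit integer weights
)

# Run the quantized model; the paper searches over Execution Providers here.
session = InferenceSession("encoder_int8.onnx", providers=["CPUExecutionProvider"])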


AllSummedUp: an open-source framework for comparing summarization evaluation metrics

Herserant, Tanguy, Guigue, Vincent

arXiv.org Artificial Intelligence

This paper investigates reproducibility challenges in automatic text summarization evaluation. Based on experiments conducted across six representative metrics, ranging from classical approaches like ROUGE to recent LLM-based methods (G-Eval, SEval-Ex), we highlight significant discrepancies between the performances reported in the literature and those observed in our experimental setting. We introduce a unified, open-source framework, applied to the SummEval dataset and designed to support fair and transparent comparison of evaluation metrics. Our results reveal a structural trade-off: the metrics most closely aligned with human judgments tend to be computationally intensive and less stable across runs. Beyond this comparative analysis, the study raises key concerns about relying on LLMs for evaluation, stressing their randomness, technical dependencies, and limited reproducibility. We advocate for more robust evaluation protocols, including exhaustive documentation and methodological standardization, to ensure greater reliability in automatic summarization assessment.
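
As a hedged illustration of the kind of meta-evaluation such a framework performs, the sketch below correlates automatic metric scores with human judgments using Kendall's tau; all scores are invented placeholders, not SummEval data.

from scipy.stats import kendalltau

# Placeholder human relevance judgments and per-summary metric scores.
human_scores  = [4.0, 2.5, 3.0, 4.5, 1.0]
metric_scores = [0.42, 0.31, 0.35, 0.47, 0.12]   # e.g. ROUGE-L F1 (placeholder)

tau, p_value = kendalltau(human_scores, metric_scores)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")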


Automatic lexicon development for emerging concepts: a methodological exploration

Kyriakoglou, Revekka, Pappa, Anna, He, Jilin, Schoen, Antoine, Laurens, Patricia, Vartampetian, Markarit, Laredo, Philippe, Kyriacopoulou, Tita

arXiv.org Artificial Intelligence

This paper presents the development of a lexicon centered on emerging concepts, focusing on non-technological innovation. It introduces a four-step methodology that combines human expertise, statistical analysis, and machine learning techniques to establish a model that can be generalized across multiple domains. The process includes the creation of a thematic corpus, the development of a Gold Standard Lexicon, the annotation and preparation of a training corpus, and finally, the implementation of learning models to identify new terms. The results demonstrate the robustness and relevance of our approach, highlighting its adaptability to various contexts and its contribution to lexical research. The methodology promises broad applicability across conceptual fields.
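
As a loose sketch of the final step, training a model to recognize contexts of lexicon terms, the snippet below fits a bag-of-words classifier; the training texts and labels are toy placeholders, not the paper's corpus or model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data: 1 = context of a lexicon term, 0 = background text.
contexts = [
    "the firm introduced a new organizational model for its teams",
    "quarterly revenue grew by three percent year over year",
]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(contexts, labels)
print(model.predict(["a novel organizational model was adopted"]))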


When Abel Kills Cain: What Machine Translation Cannot Capture

Bénel, Aurélien, Falip, Joris, Lacour, Philippe

arXiv.org Artificial Intelligence

The article aims to identify what, from a structural point of view, AI-based automatic translators cannot fully capture. It focuses on the machine's mistakes in order to try to explain their causes. The biblical story of Cain and Abel has been chosen because of its rich interpretive and critical tradition, but also because of its semantic difficulty. The investigation begins with the observation, for the translation of this text, of the language pairs and interfaces offered by the best-known machine translation services (Google Translate, DeepL). A typology of the most frequent translation errors is then established. Finally, contemporary translations are compared in order to underline the unique contribution of each. In conclusion, the article suggests a revision of translation theory and, correlatively, a reformulation of its technology concerning cultural texts.


Proof of concept of a voicebot conversing in Wolof

Gauthier, Elodie, Wade, Papa-Séga, Moudenc, Thierry, Collen, Patrice, De Neef, Emilie, Ba, Oumar, Cama, Ndeye Khoyane, Kebe, Cheikh Ahmadou Bamba, Gningue, Ndeye Aissatou, Aristide, Thomas Mendo'o

arXiv.org Artificial Intelligence

This paper presents the proof of concept of the first automatic voice assistant ever built in Wolof, the main vehicular language spoken in Senegal. This voicebot is the result of a collaborative research project between Orange Innovation in France, Orange Senegal (aka Sonatel) and ADNCorp, a small IT company based in Dakar, Senegal. The purpose of the voicebot is to provide information to Orange customers about the Sargal loyalty program of Orange Senegal using the most natural means of communication: speech. The voicebot takes as input the customer's spoken request, which is then processed by an SLU system to reply to the customer's request using audio recordings. The first results of this proof of concept are encouraging, as we achieved a WER of 22% on the ASR task and an F1-score of 78% on the NLU task.
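
For reference, the WER reported above is the word-level edit distance divided by the reference length; a minimal sketch, not the project's evaluation code:

def wer(reference: str, hypothesis: str) -> float:
    # Word-level Levenshtein distance over the number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)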


Realistic simulation of users for IT systems in cyber ranges

Dey, Alexandre, Costé, Benjamin, Totel, Éric, Bécue, Adrien

arXiv.org Artificial Intelligence

Generating user activity is a key capability both for evaluating security monitoring tools and for improving the credibility of attacker analysis platforms (e.g., honeynets). In this paper, to generate this activity, we instrument each machine by means of an external agent. This agent combines deterministic and deep-learning-based methods to adapt to different environments (e.g., multiple OSes, software versions, etc.) while maintaining high performance. We also propose conditional text generation models to facilitate the creation of conversations and documents, accelerating the definition of coherent, system-wide life scenarios.
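
As a rough sketch of what conditional text generation for such scenarios can look like, the snippet below conditions a generic language model on scenario metadata; the model choice and prompt are placeholders, not the system described in the paper.

from transformers import pipeline

# Placeholder model; the paper's generation models are not specified here.
generator = pipeline("text-generation", model="gpt2")

# Conditioning prefix encoding scenario metadata (invented example).
condition = "Subject: weekly report\nFrom: accounting\nBody:"
draft = generator(condition, max_new_tokens=40, do_sample=True)[0]["generated_text"]
print(draft)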


Automatic detection of surgical site infections from a clinical data warehouse

Quéroué, Marine, Lashéras-Bauduin, Agnès, Jouhet, Vianney, Thiessard, Frantz, Vital, Jean-Marc, Rogues, Anne-Marie, Cossin, Sébastien

arXiv.org Machine Learning

Reducing the incidence of surgical site infections (SSIs) is one of the objectives of the French nosocomial infection control program. Manual monitoring of SSIs is carried out each year by the hospital hygiene team and surgeons at the University Hospital of Bordeaux. Our goal was to develop an automatic detection algorithm based on hospital information system data. Three years (2015, 2016 and 2017) of manual spine surgery monitoring were used as a gold standard to extract features and train machine learning algorithms. The dataset contained 22 SSIs out of 2133 spine surgeries. Two different approaches were compared. The first used several data sources and achieved the best performance, but is difficult to generalize to other institutions. The second was based on free text only, with semi-automatic extraction of discriminant terms. The algorithms identified all the SSIs, with 20 and 26 false positives respectively, on the dataset. Another evaluation is underway. These results are encouraging for the development of semi-automated surveillance methods.
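
To make the reported figures concrete, a back-of-the-envelope reading, assuming all 22 SSIs count as true positives for both approaches:

# All 22 SSIs were found (sensitivity 1.0), with 20 or 26 false positives
# depending on the approach, per the abstract above.
tp, fn = 22, 0
for fp, name in [(20, "multi-source approach"), (26, "free-text approach")]:
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    print(f"{name}: sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")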


Qwant Research @DEFT 2019: Document matching and information retrieval using clinical cases

Maudet, Estelle, Cattan, Oralie, de Seyssel, Maureen, Servan, Christophe

arXiv.org Machine Learning

Task 2 addresses semantic similarity between clinical cases and discussions. For this task, we propose an approach based on language models and evaluate how different preprocessing and matching techniques affect the results. For task 3, we developed an information extraction system that yields very encouraging accuracy. We experimented with two different approaches: one based exclusively on neural networks, the other on linguistic analysis.
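
A generic sketch of language-model-based matching of this kind, using an off-the-shelf sentence encoder as a stand-in; the model name and texts are placeholders, and the paper's actual models and preprocessing differ.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder
cases = ["Patient presents with acute lower back pain after lifting."]
discussions = ["This presentation suggests a lumbar disc problem.",
               "Fever and cough point to a respiratory infection."]  # placeholders

case_vecs = model.encode(cases, normalize_embeddings=True)
disc_vecs = model.encode(discussions, normalize_embeddings=True)
scores = case_vecs @ disc_vecs.T           # cosine similarity (unit vectors)
print(int(np.argmax(scores[0])))           # index of best-matching discussion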


Can everyday AI be ethical? Fairness of Machine Learning Algorithms

Besse, Philippe, Castets-Renard, Celine, Garivier, Aurelien, Loubes, Jean-Michel

arXiv.org Artificial Intelligence

Combining big data and machine learning algorithms, the power of automatic decision tools induces as much hope as fear. Recently enacted European legislation (the GDPR) and French laws attempt to regulate the use of these tools. Leaving aside the well-identified problems of data confidentiality and impediments to competition, we focus on the risks of discrimination, the problems of transparency, and the quality of algorithmic decisions. The detailed perspective of the legal texts, confronted with the complexity and opacity of learning algorithms, reveals the need for important technological disruptions for detecting or reducing the risk of discrimination, and for addressing the right to an explanation of automatic decisions. Since the trust of developers and, above all, of users (citizens, litigants, customers) is essential, algorithms exploiting personal data must be deployed within a strict ethical framework. In conclusion, to answer this need, we list several forms of control to be developed: institutional control, an ethical charter, and external audits tied to the issuing of a label.
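
One common quantitative handle on the discrimination risk discussed above is the disparate impact ratio; a minimal sketch with invented placeholder data:

import numpy as np

# Placeholder decisions (1 = favorable) and protected-group membership.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])

rate_ref  = decisions[group == 0].mean()   # favorable rate, reference group
rate_prot = decisions[group == 1].mean()   # favorable rate, protected group
print(f"disparate impact = {rate_prot / rate_ref:.2f}")  # < 0.8 is a common alert level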