LLMs Perform Poorly at Concept Extraction in Cyber-security Research Literature
Würsch, Maxime, Kucharavy, Andrei, David, Dimitri Percia, Mermoud, Alain
Secure and reliable information systems have become a central requirement for the operational continuity of the vast majority of goods and services providers [42]. However, securing information systems in a fast-paced ecosystem of technological changes and innovations is hard [3]. New technologies in cybersecurity have short life cycles and constantly evolve [13]. This exposes information systems to attacks that exploit vulnerabilities and security gaps [3]. Hence, cybersecurity practitioners and researchers need to stay updated on the latest developments and trends to prevent incidents and increase resilience [14]. A common approach to gathering curated and synthesized information about such developments is bibliometrics-based knowledge entity extraction and comparison through embedding similarity [10, 50, 61], recently boosted by the availability of entity extractors based on large language models (LLMs) [17, 46]. However, it is unclear how appropriate this approach is for the cybersecurity literature. We address this question by emulating such an entity extraction and comparison pipeline with a variety of common entity extractors, both LLM-based and not, and by evaluating how relevant the embeddings of extracted entities are to a document understanding task, namely the classification of arXiv (https://arxiv.org) documents as relevant to cybersecurity. While LLMs burst into public attention in late 2022, in large part thanks to public trials of conversationally fine-tuned LLMs [40, 4, 31], modern large language models pre-trained on large amounts of data trace their roots back to ELMo, first released in 2018 [45].
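A minimal sketch of such an entity-embedding comparison pipeline is given below, assuming Python with scikit-learn; the extract_entities helper, the toy documents, and the labels are hypothetical placeholders rather than the extractors, corpus, or embedding models evaluated in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def extract_entities(text):
    # Hypothetical stand-in for an entity extractor; a real pipeline would
    # plug in an LLM-based or rule-based knowledge entity extractor here.
    return [token for token in text.lower().split() if len(token) > 4]

docs = [
    "Intrusion detection systems monitor network traffic for attacks.",
    "Quantum error correction protects qubits from decoherence.",
]
labels = [1, 0]  # toy labels: 1 = cybersecurity-relevant, 0 = not

# Represent each document by its bag of extracted entities, then embed the
# entity bags with TF-IDF as a simple stand-in for a learned embedding model.
entity_docs = [" ".join(extract_entities(doc)) for doc in docs]
embeddings = TfidfVectorizer().fit_transform(entity_docs)

# Entity comparison through embedding similarity.
print(cosine_similarity(embeddings))

# Downstream document understanding task: classify documents as relevant to
# cybersecurity from their entity embeddings.
classifier = LogisticRegression().fit(embeddings, labels)
print(classifier.predict(embeddings))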
Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense
Kucharavy, Andrei, Schillaci, Zachary, Maréchal, Loïc, Würsch, Maxime, Dolamic, Ljiljana, Sabonnadiere, Remi, David, Dimitri Percia, Mermoud, Alain, Lenders, Vincent
Generative Language Models gained significant attention in late 2022 and early 2023, notably with the introduction of models refined to act consistently with users' expectations of interactions with AI (conversational models). Arguably the focal point of public attention has been such a refinement of the GPT-3 model into ChatGPT, and its subsequent integration with auxiliary capabilities, including search as part of Microsoft Bing. Despite the extensive prior research invested in their development, their performance and applicability to a range of daily tasks remained unclear and their adoption niche. However, their wider utilization without a requirement for technical expertise, made possible in large part through conversational fine-tuning, revealed the extent of their true capabilities in a real-world environment. This has garnered both public excitement about their potential applications and concern about their capabilities and potential malicious uses. This review aims to provide a brief overview of the history, state of the art, and implications of Generative Language Models in terms of their principles, abilities, limitations, and future prospects, especially in the context of cyber-defense, with a focus on the Swiss operational environment.
Beyond S-curves: Recurrent Neural Networks for Technology Forecasting
Glavackij, Alexander, David, Dimitri Percia, Mermoud, Alain, Romanou, Angelika, Aberer, Karl
Because of the considerable heterogeneity and complexity of the technological landscape, building accurate forecasting models is a challenging endeavor. Owing to their high prevalence in many complex systems, S-curves are a popular forecasting approach in previous work. However, their forecasting performance has not been directly compared to that of other technology forecasting approaches. Additionally, recent developments in time series forecasting that claim to improve forecasting accuracy have yet to be applied to technological development data. This work addresses both research gaps by comparing the forecasting performance of S-curves to a baseline and by developing an autoencoder approach that employs recent advances in machine learning and time series forecasting. S-curve forecasts largely exhibit a mean absolute percentage error (MAPE) comparable to that of a simple ARIMA baseline. However, for a minority of emerging technologies, the MAPE increases by two orders of magnitude. Our autoencoder approach improves the MAPE by 13.5% on average over the second-best result. It forecasts established technologies with the same accuracy as the other approaches, and it is especially strong at forecasting emerging technologies, with a mean MAPE 18% lower than the next best result. Our results imply that a simple ARIMA model is preferable to the S-curve for technology forecasting. Practitioners looking for more accurate forecasts should opt for the presented autoencoder approach.
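As an illustration of the evaluation metric and the baseline, the sketch below computes the MAPE of an ARIMA forecast on a synthetic S-curve-like series, assuming Python with numpy and statsmodels; the series, the train/test split, and the ARIMA order are illustrative assumptions, not the paper's configuration.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def mape(actual, forecast):
    # Mean absolute percentage error, in percent.
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Synthetic "technology activity" series with logistic (S-curve-like) growth
# plus noise, standing in for the bibliometric time series used in practice.
t = np.arange(40, dtype=float)
rng = np.random.default_rng(0)
series = 100.0 / (1.0 + np.exp(-0.3 * (t - 20.0))) + rng.normal(0.0, 1.0, t.size)

train, test = series[:32], series[32:]

# Simple ARIMA baseline: fit on the training window, forecast the held-out tail.
fitted = ARIMA(train, order=(1, 1, 1)).fit()
forecast = fitted.forecast(steps=test.size)

print(f"ARIMA baseline MAPE: {mape(test, forecast):.1f}%")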