
Collaborating Authors

Corsica


U.N. calls for probe after alleged drone attack on Gaza-bound aid flotilla

The Japan Times

Activists wave Palestinian flags as they gather to support a flotilla carrying humanitarian aid in Ajaccio, on the French Mediterranean island of Corsica, on Sept. 12. | AFP-JIJI

Rome - The United Nations called Wednesday for an investigation into alleged drone attacks against a Gaza-bound aid flotilla that prompted Italy and Spain to send naval ships to help. The Global Sumud Flotilla, carrying activists including Swedish environmentalist Greta Thunberg, blamed Israel for more than a dozen explosions heard around its vessels off Greece late on Tuesday. U.N. Human Rights Office spokesperson Thameen Al-Kheetan said anyone responsible for the violations should be held accountable, and called for an independent, impartial and thorough investigation.


Short-Term Forecasting of Energy Production and Consumption Using Extreme Learning Machine: A Comprehensive MIMO based ELM Approach

Voyant, Cyril, Despotovic, Milan, Garcia-Gutierrez, Luis, Asloune, Mohammed, Saint-Drenan, Yves-Marie, Duchaud, Jean-Laurent, Faggianelli, Ghjuvan Antone, Magliaro, Elena

arXiv.org Artificial Intelligence

A novel methodology for short-term energy forecasting using an Extreme Learning Machine ($\mathtt{ELM}$) is proposed. Using six years of hourly data collected in Corsica (France) from multiple energy sources (solar, wind, hydro, thermal, bioenergy, and imported electricity), our approach predicts both individual energy outputs and total production (including imports, which closely follow energy demand, modulo losses) through a Multi-Input Multi-Output ($\mathtt{MIMO}$) architecture. To address non-stationarity and seasonal variability, sliding window techniques and cyclic time encoding are incorporated, enabling dynamic adaptation to fluctuations. The $\mathtt{ELM}$ model significantly outperforms persistence-based forecasting, particularly for solar and thermal energy, achieving an $\mathtt{nRMSE}$ of $17.9\%$ and $5.1\%$, respectively, with $\mathtt{R^2} > 0.98$ (1-hour horizon). The model maintains high accuracy up to five hours ahead, beyond which renewable energy sources become increasingly volatile. While $\mathtt{MIMO}$ provides only marginal gains over Single-Input Single-Output ($\mathtt{SISO}$) architectures, the approach offers key advantages over deep learning methods such as $\mathtt{LSTM}$: its closed-form solution and lower computational demands make it well-suited for real-time applications, including online learning. Beyond predictive accuracy, the proposed methodology is adaptable to various contexts and datasets, as it can be tuned to local constraints such as resource availability, grid characteristics, and market structures.
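The closed-form ELM fit the abstract refers to can be sketched in a few lines: input weights are drawn at random and frozen, and only the output weights are solved by least squares, which extends naturally to MIMO when the target has several columns. This is a minimal illustration on made-up toy data, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64):
    """Fit an ELM: random frozen hidden layer, closed-form output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy MIMO usage: two outputs predicted jointly from the same hidden layer.
X = rng.normal(size=(200, 5))
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=200), X[:, 1] ** 2])
W, b, beta = elm_fit(X, Y)
Y_hat = elm_predict(X, W, b, beta)
```

Because the only learned parameters come from a single pseudo-inverse, retraining is cheap, which is what makes the online-learning setting mentioned above practical.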


NICE^k Metrics: Unified and Multidimensional Framework for Evaluating Deterministic Solar Forecasting Accuracy

Voyant, Cyril, Despotovic, Milan, Garcia-Gutierrez, Luis, Silva, Rodrigo Amaro e, Lauret, Philippe, Soubdhan, Ted, Bailek, Nadjem

arXiv.org Machine Learning

Accurate solar energy output prediction is key for integrating renewables into grids, maintaining stability, and improving energy management. However, standard error metrics such as Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Skill Scores (SS) fail to capture the multidimensional nature of solar irradiance forecasting. These metrics lack sensitivity to forecastability, rely on arbitrary baselines (e.g., clear-sky models), and are poorly suited for operational use. To address this, we introduce the NICEk framework (Normalized Informed Comparison of Errors, with k = 1, 2, 3, Sigma), offering a robust and interpretable evaluation of forecasting models. Each NICEk score corresponds to an Lk norm: NICE1 targets average errors, NICE2 emphasizes large deviations, NICE3 highlights outliers, and NICESigma combines all. Using Monte Carlo simulations and data from 68 stations in the Spanish SIAR network, we evaluated methods including autoregressive models, extreme learning, and smart persistence. Theoretical and empirical results align when assumptions hold (e.g., R^2 ~ 1.0 for NICE2). Most importantly, NICESigma consistently shows higher discriminative power (p < 0.05), outperforming traditional metrics (p > 0.05). The NICEk metrics exhibit stronger statistical significance (e.g., p-values from 10^-6 to 0.004 across horizons) and greater generalizability. They offer a unified and operational alternative to standard error metrics in deterministic solar forecasting.
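Taking the abstract's description literally, each NICEk score is built on an Lk norm of the forecast errors. The sketch below normalizes by the same norm of a reference forecast's errors; that normalization choice, and averaging NICE1..3 for NICESigma, are my assumptions for illustration, not necessarily the paper's exact definitions:

```python
import numpy as np

def nice_k(err, ref_err, k):
    """Hypothetical NICEk sketch: mean-Lk norm of model errors divided by
    the same norm of a reference (e.g. smart-persistence) error."""
    e = np.mean(np.abs(err) ** k) ** (1 / k)
    r = np.mean(np.abs(ref_err) ** k) ** (1 / k)
    return e / r

def nice_sigma(err, ref_err):
    """Combine NICE1, NICE2, NICE3 by simple averaging (an assumption)."""
    return np.mean([nice_k(err, ref_err, k) for k in (1, 2, 3)])
```

With this reading, k = 1 weights all errors equally, k = 2 emphasizes large deviations, and k = 3 is dominated by outliers, matching the roles the abstract assigns to each score.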


On the Importance of Clearsky Model in Short-Term Solar Radiation Forecasting

Voyant, Cyril, Despotovic, Milan, Notton, Gilles, Saint-Drenan, Yves-Marie, Asloune, Mohammed, Garcia-Gutierrez, Luis

arXiv.org Artificial Intelligence

Clearsky models are widely used in solar energy for many applications such as quality control, resource assessment, satellite-based irradiance estimation and forecasting. However, their use in forecasting and nowcasting is associated with a number of challenges. Synchronization errors, reliance on the clearsky index (ratio of the global horizontal irradiance to its cloud-free counterpart) and high sensitivity of the clearsky model to errors in aerosol optical depth at low solar elevation limit their added value in real-time applications. This paper explores the feasibility of short-term forecasting without relying on a clearsky model. We propose a clearsky-free forecasting approach using Extreme Learning Machine (ELM) models. ELM learns daily periodicity and local variability directly from raw Global Horizontal Irradiance (GHI) data, eliminating the need for clearsky normalization, simplifying the forecasting process and improving scalability. Our approach is a non-linear adaptive statistical method that implicitly learns the irradiance in cloud-free conditions, removing the need for a clearsky model and its related operational issues. Deterministic and probabilistic results are compared to traditional benchmarks, including ARMA with McClear-generated clearsky data and quantile regression for probabilistic forecasts. ELM matches or outperforms these methods, providing accurate predictions and robust uncertainty quantification. This approach offers a simple, efficient solution for real-time solar forecasting. By overcoming the limitations of the usual multiplicative clearsky stationarization scheme, it provides a flexible and reliable framework for modern energy systems.
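The low-sun instability the abstract mentions is easy to see numerically: the clearsky index is the ratio defined above, so a small absolute error in the modelled cloud-free irradiance barely matters at noon but swamps the index near sunrise or sunset. The numbers below are illustrative, not from the paper:

```python
def clearsky_index(ghi, ghi_clear):
    """kc = GHI / GHI_clear, the ratio described in the abstract."""
    return ghi / ghi_clear

# At noon, a 20 W/m^2 clearsky-model error barely moves kc...
kc_noon = clearsky_index(800, 1000)
kc_noon_biased = clearsky_index(800, 980)

# ...but near sunset the same 20 W/m^2 error changes kc drastically.
kc_dusk = clearsky_index(30, 50)
kc_dusk_biased = clearsky_index(30, 30)
```

This is why a multiplicative clearsky normalization degrades at low solar elevation, and why learning directly from raw GHI sidesteps the issue.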


Classification problem in liability insurance using machine learning models: a comparative study

Qazvini, Marjan

arXiv.org Machine Learning

Insurance companies use different factors to classify policyholders. In this study, we apply several machine learning models, such as nearest neighbour and logistic regression, to the Actuarial Challenge dataset used by Qazvini (2019) to classify liability insurance policies into two groups: 1 - policies with claims and 2 - policies without claims. The applications of Machine Learning (ML) models and Artificial Intelligence (AI) in areas such as medical diagnosis, economics, banking, fraud detection, and agriculture have been known for many years, and ML models have changed these industries remarkably. However, despite their high predictive power and their capability to identify nonlinear transformations and interactions between variables, they are only slowly being introduced into the insurance industry and actuarial fields.
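One of the comparative baselines, nearest neighbour, is simple enough to sketch directly. This toy version (Euclidean distance, majority vote over {0, 1} claim labels) is illustrative only and not the study's exact setup or features:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbour classifier for binary labels {0, 1}:
    for each test point, take the majority vote of the k closest
    training policies in feature space."""
    # Pairwise Euclidean distances: (n_test, n_train)
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]          # indices of k nearest neighbours
    votes = y_train[idx]                        # their claim labels
    return (votes.mean(axis=1) >= 0.5).astype(int)  # majority vote
```

In practice one would standardize the rating factors first, since distance-based methods are sensitive to feature scale.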


RisingBALLER: A player is a token, a match is a sentence, A path towards a foundational model for football players data analytics

Adjileye, Akedjou Achraff

arXiv.org Artificial Intelligence

In this paper, I introduce RisingBALLER, the first publicly available approach that leverages a transformer model trained on football match data to learn match-specific player representations. Drawing inspiration from advances in language modeling, RisingBALLER treats each football match as a unique sequence in which players serve as tokens, with their embeddings shaped by the specific context of the match. Through the use of masked player prediction (MPP) as a pre-training task, RisingBALLER learns foundational features for football player representations, similar to how language models learn semantic features for text representations. As a downstream task, I introduce next match statistics prediction (NMSP) to showcase the effectiveness of the learned player embeddings. The NMSP model surpasses a strong baseline commonly used for performance forecasting within the community. Furthermore, I conduct an in-depth analysis to demonstrate how RisingBALLER's learned embeddings can be used in various football analytics tasks, such as producing meaningful positional features that capture the essence and variety of player roles beyond rigid (x, y) coordinates, team cohesion estimation, and similar player retrieval for more effective data-driven scouting. More than a simple machine learning model, RisingBALLER is a comprehensive framework designed to transform football data analytics by learning high-level foundational features for players, taking into account the context of each match. It offers a deeper understanding of football players beyond individual statistics. In recent years, the field of machine learning has been revolutionized by the introduction of the transformer architecture [1], which initially gained prominence in natural language processing (NLP) with models like BERT [2], RoBERTa [3], and more recently, the widespread use of large language models (LLMs).
These models, often trained on seemingly simple tasks such as next token prediction or masked token prediction, have demonstrated remarkable performance in learning high-level features that effectively represent each word and model language intricately. They are capable of learning nuanced representations of the multiple meanings a word can have depending on its context.
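The masked player prediction (MPP) pre-training task can be illustrated at the data level: each match is a sequence of player tokens, one of which is hidden and becomes the prediction target, exactly as masked token prediction works for words. This is a data-preparation sketch only; the model architecture and actual masking scheme are in the paper:

```python
import random

def mpp_examples(match_players, mask_token="[MASK]"):
    """Build masked-player-prediction training pairs: for each match
    (a list of player ids), replace one randomly chosen player with a
    mask token and keep the original id as the target."""
    examples = []
    for players in match_players:
        i = random.randrange(len(players))
        inp = players.copy()
        target = inp[i]
        inp[i] = mask_token
        examples.append((inp, target))
    return examples
```

A transformer trained on such pairs must infer the missing player from teammates and opponents, which is what forces the embeddings to encode role and context.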


Testing and Evaluation of Large Language Models: Correctness, Non-Toxicity, and Fairness

Wang, Wenxuan

arXiv.org Artificial Intelligence

Large language models (LLMs), such as ChatGPT, have rapidly penetrated into people's work and daily lives over the past few years, due to their extraordinary conversational skills and intelligence. ChatGPT has become the fastest-growing software in terms of user numbers in human history and an important foundational model for the next generation of artificial intelligence applications. However, the outputs generated by LLMs are not entirely reliable, often containing factual errors, biases, and toxicity. Given their vast number of users and wide range of application scenarios, these unreliable responses can lead to serious negative impacts. This thesis introduces exploratory works in the field of language model reliability conducted during the PhD study, focusing on the correctness, non-toxicity, and fairness of LLMs from both software testing and natural language processing perspectives. First, to measure the correctness of LLMs, we introduce two testing frameworks, FactChecker and LogicAsker, to evaluate factual knowledge and logical reasoning accuracy, respectively. Second, for the non-toxicity of LLMs, we introduce two works for red-teaming LLMs. Third, to evaluate the fairness of LLMs, we introduce two evaluation frameworks, BiasAsker and XCulturalBench, to measure the social bias and cultural bias of LLMs, respectively.


Cooperative learning of Pl@ntNet's Artificial Intelligence algorithm: how does it work and how can we improve it?

Lefort, Tanguy, Affouard, Antoine, Charlier, Benjamin, Lombardo, Jean-Christophe, Chouet, Mathias, Goëau, Hervé, Salmon, Joseph, Bonnet, Pierre, Joly, Alexis

arXiv.org Artificial Intelligence

Deep learning models for plant species identification rely on large annotated datasets. The Pl@ntNet system enables global data collection by allowing users to upload and annotate plant observations, leading to noisy labels due to diverse user skills. Achieving consensus is crucial for training, but the vast scale of collected data makes traditional label aggregation strategies challenging. Existing methods either retain all observations, resulting in noisy training data, or selectively keep those with sufficient votes, discarding valuable information. Additionally, as many species are rarely observed, user expertise cannot be evaluated via inter-user agreement: otherwise, botanical experts would carry a lower weight in the AI training step than the average user. Our proposed label aggregation strategy aims to cooperatively train plant identification AI models. This strategy estimates user expertise as a trust score per user, based on their ability to identify plant species from crowdsourced data. The trust score is recursively estimated from correctly identified species given the current estimated labels. This interpretable score exploits botanical experts' knowledge and the heterogeneity of users. Subsequently, our strategy removes unreliable observations but retains those with limited trusted annotations, unlike other approaches. We evaluate Pl@ntNet's strategy on a released large subset of the Pl@ntNet database focused on European flora, comprising over 6M observations and 800K users. We demonstrate that estimating users' skills based on the diversity of their expertise enhances labeling performance. Our findings emphasize the synergy of human annotation and data filtering in improving AI performance for a refined dataset. We also explore incorporating AI-based votes alongside human input, which can further enhance human-AI interactions to detect unreliable observations.
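The recursive trust estimation can be sketched as an alternating scheme: compute weighted-majority labels given the current trust scores, then recompute each user's trust as their agreement rate with those labels, and repeat. The toy example below is a simplified illustration of that loop, not Pl@ntNet's exact estimator:

```python
def estimate_trust(votes, n_iter=10):
    """votes: dict obs_id -> list of (user_id, species_label).
    Alternates between trust-weighted majority labels and per-user
    trust scores (fraction of votes agreeing with current labels)."""
    users = {u for v in votes.values() for u, _ in v}
    trust = {u: 1.0 for u in users}   # start by trusting everyone equally
    labels = {}
    for _ in range(n_iter):
        # Weighted majority label per observation.
        for o, v in votes.items():
            scores = {}
            for u, lab in v:
                scores[lab] = scores.get(lab, 0.0) + trust[u]
            labels[o] = max(scores, key=scores.get)
        # Trust = agreement rate with the current consensus labels.
        for u in users:
            agree = [lab == labels[o] for o, v in votes.items()
                     for uu, lab in v if uu == u]
            trust[u] = sum(agree) / len(agree)
    return trust, labels
```

Experts who consistently match the consensus accumulate high trust and dominate future votes, which is how rare-species expertise can outweigh sheer numbers of casual annotators.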


Know When To Stop: A Study of Semantic Drift in Text Generation

Spataru, Ava, Hambro, Eric, Voita, Elena, Cancedda, Nicola

arXiv.org Artificial Intelligence

In this work, we explicitly show that modern LLMs tend to generate correct facts first, then "drift away" and generate incorrect facts later: this was occasionally observed but never properly measured. We develop a semantic drift score that measures the degree of separation between correct and incorrect facts in generated texts and confirm our hypothesis when generating Wikipedia-style biographies. This correct-then-incorrect generation pattern suggests that factual accuracy can be improved by knowing when to stop generation. Therefore, we explore the trade-off between information quantity and factual accuracy for several early stopping methods and manage to improve factuality by a large margin. We further show that reranking with semantic similarity can improve these results, both compared to the baseline and when combined with early stopping. Finally, we try calling an external API to bring the model back to the right generation path, but do not obtain positive results. Overall, our methods generalize and can be applied to any long-form text generation to produce more reliable information, by balancing trade-offs between factual accuracy, information quantity and computational cost.
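One simple reading of a separation score like this: over the sequence of generated facts (flagged correct or not), find the split point that best separates correct facts before it from incorrect facts after it, and average the two purities. This is my simplified illustration of the idea; the paper's exact definition may differ:

```python
def drift_score(correct_flags):
    """correct_flags: list of 0/1 per generated fact, in order.
    Returns the best achievable separation in [0, 1]: 1.0 means the
    text is perfectly split into a correct prefix and incorrect suffix."""
    n = len(correct_flags)
    best = 0.0
    for t in range(1, n):
        before, after = correct_flags[:t], correct_flags[t:]
        purity = (sum(before) / len(before)                 # accuracy before split
                  + (len(after) - sum(after)) / len(after)  # error rate after split
                  ) / 2
        best = max(best, purity)
    return best
```

A high score means a well-placed early stop would discard mostly incorrect facts, which is exactly the lever the early-stopping experiments exploit.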


The Earth is Flat? Unveiling Factual Errors in Large Language Models

Wang, Wenxuan, Shi, Juluan, Tu, Zhaopeng, Yuan, Youliang, Huang, Jen-tse, Jiao, Wenxiang, Lyu, Michael R.

arXiv.org Artificial Intelligence

Large Language Models (LLMs) like ChatGPT are foundational in various applications due to their extensive knowledge from pre-training and fine-tuning. Despite this, they are prone to generating factual and commonsense errors, raising concerns in critical areas like healthcare, journalism, and education, where they could mislead users. Current methods for evaluating LLMs' veracity are limited by test data leakage or the need for extensive human labor, hindering efficient and accurate error detection. To tackle this problem, we introduce a novel, automatic testing framework, FactChecker, aimed at uncovering factual inaccuracies in LLMs. This framework involves three main steps: First, it constructs a factual knowledge graph by retrieving fact triplets from a large-scale knowledge database. Then, leveraging the knowledge graph, FactChecker employs a rule-based approach to generate three types of questions (Yes-No, Multiple-Choice, and WH questions) that involve single-hop and multi-hop relations, along with correct answers. Lastly, it assesses the LLMs' responses for accuracy using tailored matching strategies for each question type. Our extensive tests on six prominent LLMs, including text-davinci-002, text-davinci-003, ChatGPT (gpt-3.5-turbo, gpt-4), Vicuna, and LLaMA-2, reveal that FactChecker can trigger factual errors in up to 45\% of questions in these models. Moreover, we demonstrate that FactChecker's test cases can improve LLMs' factual accuracy through in-context learning and fine-tuning (e.g., llama-2-13b-chat's accuracy increased from 35.3\% to 68.5\%). We are making all code, data, and results available for future research endeavors.
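The rule-based question generation from fact triplets can be illustrated with a template lookup per relation type; the relation names and templates below are hypothetical stand-ins, not FactChecker's actual ones:

```python
def yes_no_question(subj, rel, obj):
    """Turn a (subject, relation, object) fact triplet into a Yes-No
    question via a per-relation template (hypothetical templates)."""
    templates = {
        "capital_of": "Is {o} the capital of {s}?",
        "author_of": "Did {o} write {s}?",
    }
    return templates[rel].format(s=subj, o=obj)
```

The same triplet can also seed a Multiple-Choice variant (sampling distractor objects from the knowledge graph) or a WH question ("What is the capital of {s}?"), with the object kept as the gold answer for automatic matching.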