Balancing Tails when Comparing Distributions: Comprehensive Equity Index (CEI) with Application to Bias Evaluation in Operational Face Biometrics

Solano, Imanol, Fierrez, Julian, Morales, Aythami, Peña, Alejandro, Tolosana, Ruben, Zamora-Martinez, Francisco, Agustin, Javier San

arXiv.org Artificial Intelligence

Demographic bias in high-performance face recognition (FR) systems often eludes detection by existing metrics, especially with respect to subtle disparities in the tails of the score distribution. We introduce the Comprehensive Equity Index (CEI), a novel metric designed to address this limitation. CEI uniquely analyzes genuine and impostor score distributions separately, enabling a configurable focus on tail probabilities while also considering overall distribution shapes. Our extensive experiments (evaluating state-of-the-art FR systems, intentionally biased models, and diverse datasets) confirm CEI's superior ability to detect nuanced biases where previous methods fall short. Furthermore, we present CEI^A, an automated version of the metric that enhances objectivity and simplifies practical application. CEI provides a robust and sensitive tool for operational FR fairness assessment. The proposed methods have been developed particularly for bias evaluation in face biometrics but, in general, they are applicable for comparing statistical distributions in any problem where one is interested in analyzing the distribution tails.
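The core idea of comparing two score distributions with a configurable emphasis on their tails can be sketched as follows. This is an illustrative stand-in, not the paper's CEI formula: it simply mixes a lower-tail mean gap with an overall mean gap via a weight `alpha` (the function name, `tail_frac`, and `alpha` are our own choices).

```python
import numpy as np

def tail_weighted_distance(scores_a, scores_b, tail_frac=0.05, alpha=0.5):
    """Illustrative tail-sensitive comparison of two score samples.

    Mixes (1) the gap between the means of the worst `tail_frac`
    fraction of each sample with (2) the gap between overall means.
    A simplified sketch, not the CEI definition from the paper.
    """
    a, b = np.sort(scores_a), np.sort(scores_b)
    k_a = max(1, int(tail_frac * len(a)))
    k_b = max(1, int(tail_frac * len(b)))
    tail_term = abs(a[:k_a].mean() - b[:k_b].mean())   # lower-tail gap
    body_term = abs(a.mean() - b.mean())               # overall-shape gap
    return alpha * tail_term + (1 - alpha) * body_term

rng = np.random.default_rng(0)
g1 = rng.normal(0.8, 0.05, 10_000)                     # genuine scores, group 1
g2 = np.concatenate([rng.normal(0.8, 0.05, 9_500),
                     rng.normal(0.5, 0.05, 500)])      # same body, heavier tail
print(tail_weighted_distance(g1, g2))
```

A plain mean comparison would barely separate `g1` from `g2`; the tail term is what exposes the heavier lower tail, which mirrors the motivation in the abstract.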


LLM-Driven Self-Refinement for Embodied Drone Task Planning

Zhang, Deyu, Zhang, Xicheng, Li, Jiahao, Long, Tingting, Dai, Xunhua, Fu, Yongjian, Zhang, Jinrui, Ren, Ju, Zhang, Yaoxue

arXiv.org Artificial Intelligence

We introduce SRDrone, a novel system designed for self-refinement task planning in industrial-grade embodied drones. SRDrone incorporates two key technical contributions: First, it employs a continuous state evaluation methodology to robustly and accurately determine task outcomes and provide explanatory feedback. This approach supersedes conventional reliance on single-frame final-state assessment for continuous, dynamic drone operations. Second, SRDrone implements a hierarchical Behavior Tree (BT) modification model. This model integrates multi-level BT plan analysis with a constrained strategy space to enable structured reflective learning from experience. Experimental results demonstrate that SRDrone achieves a 44.87% improvement in Success Rate (SR) over baseline methods. Furthermore, real-world deployment utilizing an experience base optimized through iterative self-refinement attains a 96.25% SR. By embedding adaptive task refinement capabilities within an industrial-grade BT planning framework, SRDrone effectively integrates the general reasoning intelligence of Large Language Models (LLMs) with the stringent physical execution constraints inherent to embodied drones. Code is available at https://github.com/ZXiiiC/SRDrone.
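Behavior Trees of the kind SRDrone's hierarchical modification model operates on compose primitive actions with Sequence and Selector nodes. The minimal sketch below follows generic BT conventions (a `tick` protocol returning status strings); none of it is taken from the SRDrone codebase.

```python
# Minimal behavior-tree sketch: Sequence succeeds only if all children
# succeed in order; Selector succeeds as soon as one child succeeds.
# Generic BT conventions, for illustration only.

class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()  # returns "SUCCESS" or "FAILURE"

class Sequence:
    def __init__(self, children):
        self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() != "SUCCESS":
                return "FAILURE"
        return "SUCCESS"

class Selector:
    def __init__(self, children):
        self.children = children
    def tick(self):
        for c in self.children:
            if c.tick() == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"

takeoff = Action("takeoff", lambda: "SUCCESS")
scan    = Action("scan", lambda: "FAILURE")          # primary behavior fails
retry   = Action("retry_scan", lambda: "SUCCESS")    # fallback succeeds
plan = Sequence([takeoff, Selector([scan, retry])])
print(plan.tick())  # Selector falls back to retry_scan, so the plan succeeds
```

A "BT modification" in this setting amounts to structured edits over such trees, e.g. inserting a fallback branch after a diagnosed failure.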


Decentralized Federated Learning of Probabilistic Generative Classifiers

Pérez, Aritz, Echegoyen, Carlos, Santafé, Guzmán

arXiv.org Artificial Intelligence

Federated learning is a paradigm of increasing relevance in real-world applications, aimed at building a global model across a network of heterogeneous users without requiring the sharing of private data. We focus on model learning over decentralized architectures, where users collaborate directly to update the global model without relying on a central server. In this context, the current paper proposes a novel approach to collaboratively learn probabilistic generative classifiers with a parametric form. The framework is composed of a communication network over a set of local nodes, each with its own local data, and a local updating rule. The proposal involves sharing local statistics with neighboring nodes; each node aggregates the neighbors' information and iteratively learns its own local classifier, which progressively converges to a global model. Extensive experiments demonstrate that the algorithm consistently converges to a globally competitive model across a wide range of network topologies, network sizes, local dataset sizes, and extreme non-i.i.d. data distributions. In recent years, federated learning (FL) [1], [2] has gained increasing attention from both the research community [3], [4] and private companies [5], [6], as it enables the development of machine learning models across multiple users without requiring data centralization. This design inherently offers a fundamental layer of privacy while reducing the costs associated with massive data storage. FL traditionally achieves this by using a user-server architecture, where users train local models and share updates with a central server that aggregates them to build a global model [7], [8]. In contrast, decentralized FL [4], [9], [10] eliminates the need for a central server by enabling users to communicate directly and collaboratively train machine learning models.
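The neighbor-aggregation idea, in which each node repeatedly mixes its local statistics with those of its neighbors until all nodes agree, can be illustrated with a generic gossip-averaging sketch. The plain neighborhood-averaging rule and the ring topology below are assumptions for illustration, not necessarily the paper's exact update rule.

```python
import numpy as np

def gossip_average(local_stats, neighbors, rounds=200):
    """Each node repeatedly replaces its statistic with the average of
    its own and its neighbors' statistics (synchronous gossip)."""
    stats = np.array(local_stats, dtype=float)
    for _ in range(rounds):
        new = stats.copy()
        for i, nbrs in enumerate(neighbors):
            group = [i] + list(nbrs)
            new[i] = stats[group].mean(axis=0)  # aggregate self + neighbors
        stats = new
    return stats

n = 6
ring = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # ring topology
local = [[float(i)] for i in range(n)]                  # each node's local statistic
result = gossip_average(local, ring)
print(result.round(3))  # every node approaches the global mean 2.5
```

With sufficient statistics of a parametric generative classifier in place of the scalar above, each node's locally fitted model converges toward the model fitted on the pooled data, which is the decentralized behavior the abstract describes.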


Ghost Policies: A New Paradigm for Understanding and Learning from Failure in Deep Reinforcement Learning

Olaz, Xabier

arXiv.org Artificial Intelligence

Deep Reinforcement Learning (DRL) agents often exhibit intricate failure modes that are difficult to understand, debug, and learn from. This opacity hinders their reliable deployment in real-world applications. To address this critical gap, we introduce ``Ghost Policies,'' a concept materialized through Arvolution, a novel Augmented Reality (AR) framework. Arvolution renders an agent's historical failed policy trajectories as semi-transparent ``ghosts'' that coexist spatially and temporally with the active agent, enabling an intuitive visualization of policy divergence. Arvolution uniquely integrates: (1) AR visualization of ghost policies, (2) a behavioural taxonomy of DRL maladaptation, (3) a protocol for systematic human disruption to scientifically study failure, and (4) a dual-learning loop where both humans and agents learn from these visualized failures. We propose a paradigm shift, transforming DRL agent failures from opaque, costly errors into invaluable, actionable learning resources, laying the groundwork for a new research field: ``Failure Visualization Learning.''


Adaptive Bayesian Very Short-Term Wind Power Forecasting Based on the Generalised Logit Transformation

Shen, Tao, Browell, Jethro, Castro-Camilo, Daniela

arXiv.org Artificial Intelligence

Wind power plays an increasingly significant role in achieving the 2050 Net Zero Strategy. Despite its rapid growth, its inherent variability presents challenges in forecasting. Accurately forecasting wind power generation is one key requirement for the stable and controllable integration of renewable energy into existing grid operations. This paper proposes an adaptive method for very short-term forecasting that combines the generalised logit transformation with a Bayesian approach. The generalised logit transformation maps double-bounded wind power data to an unbounded domain, facilitating the application of Bayesian methods. A novel adaptive mechanism for updating the transformation shape parameter is introduced to leverage Bayesian updates by recovering a small sample of representative data. Four adaptive forecasting methods are investigated, evaluating their advantages and limitations through an extensive case study of over 100 wind farms spanning four years in the UK. The methods are evaluated using the Continuous Ranked Probability Score (CRPS), and we propose the use of functional reliability diagrams to assess calibration. Results indicate that the proposed Bayesian method with adaptive shape parameter updating outperforms benchmarks, yielding consistent improvements in CRPS and forecast reliability. The method effectively addresses uncertainty, ensuring robust and accurate probabilistic forecasting which is essential for grid integration and decision-making.
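A common parameterisation of the generalised logit for wind power normalised to (0, 1) is y = log(x^ν / (1 − x^ν)), with shape parameter ν; ν = 1 recovers the standard logit. The sketch below uses this widely cited form for illustration; the paper should be consulted for its exact variant and the adaptive update of ν.

```python
import numpy as np

def glogit(x, nu=1.0):
    """Generalised logit: maps x in (0, 1) to the real line.
    One common form, y = log(x**nu / (1 - x**nu)); illustrative only."""
    x = np.asarray(x, dtype=float)
    return np.log(x**nu / (1.0 - x**nu))

def glogit_inv(y, nu=1.0):
    """Inverse transform, mapping forecasts back to the bounded domain."""
    y = np.asarray(y, dtype=float)
    return (np.exp(y) / (1.0 + np.exp(y))) ** (1.0 / nu)

x = np.array([0.1, 0.5, 0.9])   # normalised wind power values
y = glogit(x, nu=0.8)           # unbounded, ready for Gaussian/Bayesian modelling
print(glogit_inv(y, nu=0.8))    # round-trips back to x
```

The point of the transformation is that Gaussian-type Bayesian updates are performed on the unbounded `y`, and predictive densities are pushed back through `glogit_inv` to respect the physical bounds.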


Concept Map Assessment Through Structure Classification

Vossen, Laís P. V., Gasparini, Isabela, Oliveira, Elaine H. T., Czinczel, Berrit, Harms, Ute, Menzel, Lukas, Gombert, Sebastian, Neumann, Knut, Drachsler, Hendrik

arXiv.org Artificial Intelligence

Due to their versatility, concept maps are used in various educational settings and serve as tools that enable educators to comprehend students' knowledge construction. An essential component for analyzing a concept map is its structure, which can be categorized into three distinct types: spoke, network, and chain. Understanding the predominant structure in a map offers insights into the student's depth of comprehension of the subject. Therefore, this study examined 317 distinct concept map structures, classifying them into one of the three types, and used statistical and descriptive information from the maps to train multiclass classification models. As a result, we achieved an 86% accuracy in classification using a Decision Tree. This promising outcome can be employed in concept map assessment systems to provide real-time feedback to the student.
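Degree statistics of the map's graph are the kind of descriptive features that can separate the three shapes: a spoke has one high-degree hub surrounded by leaves, a chain has maximum degree two, and a network has a higher mean degree from cross-links. The features and the hand-written threshold rule below are our illustrative guesses; the paper instead trains a multiclass Decision Tree on map statistics.

```python
def structure_features(edges):
    """Degree statistics of an undirected concept map given as node pairs."""
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    degrees = list(deg.values())
    return {
        "max_degree": max(degrees),
        "mean_degree": sum(degrees) / len(degrees),
    }

def classify(edges):
    """Toy rule standing in for the paper's trained Decision Tree."""
    f = structure_features(edges)
    if f["mean_degree"] > 2.0:   # cross-links raise the mean degree
        return "network"
    if f["max_degree"] > 2:      # a single hub dominates
        return "spoke"
    return "chain"               # a path: max degree two

spoke = [("hub", c) for c in "abcde"]
chain = [("a", "b"), ("b", "c"), ("c", "d")]
net   = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "a")]
print(classify(spoke), classify(chain), classify(net))
```

In practice these features (plus counts of propositions, levels, and cross-links) would be fed to a trained classifier rather than fixed thresholds.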


PLS-based approach for fair representation learning

De-Diego, Elena M., Perez-Suay, Adrián, Gordaliza, Paula, Loubes, Jean-Michel

arXiv.org Machine Learning

We revisit the problem of fair representation learning by proposing Fair Partial Least Squares (PLS) components. PLS is widely used in statistics to efficiently reduce the dimension of the data by providing representations tailored for prediction. We propose a novel method to incorporate fairness constraints in the construction of PLS components. This new algorithm provides a feasible way to construct such features in both the linear and non-linear cases using kernel embeddings. The efficiency of our method is evaluated on different datasets, and we show its superiority with respect to the standard fair PCA method.
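The first PLS direction is the covariance direction w ∝ Xᵀy. As a crude, illustrative way to impose a fairness constraint (not the paper's construction), one can first project the sensitive attribute s out of every feature column, so the resulting component is uncorrelated with s by construction.

```python
import numpy as np

def fair_pls_component(X, y, s):
    """First PLS score after removing the sensitive direction s.
    Illustrative sketch only; the paper's fairness constraint may differ."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    s = s - s.mean()
    s = s / np.linalg.norm(s)
    X_fair = X - np.outer(s, s @ X)   # columns now orthogonal to s
    w = X_fair.T @ y                   # PLS direction: covariance with y
    w = w / np.linalg.norm(w)
    return X_fair @ w                  # first "fair" component scores

rng = np.random.default_rng(1)
s = rng.normal(size=200)                               # sensitive attribute
X = np.column_stack([s + 0.1 * rng.normal(size=200),   # feature leaking s
                     rng.normal(size=200)])
y = X[:, 1] + 0.5 * s
comp = fair_pls_component(X, y, s)
print(abs(np.corrcoef(comp, s)[0, 1]))  # near zero: component decorrelated from s
```

The kernel case mentioned in the abstract would replace X with a feature-space embedding; the orthogonalisation idea carries over to kernel matrices.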


Divergent Emotional Patterns in Disinformation on Social Media? An Analysis of Tweets and TikToks about the DANA in Valencia

Arcos, Iván, Rosso, Paolo, Salaverría, Ramón

arXiv.org Artificial Intelligence

This study investigates the dissemination of disinformation on social media platforms during the DANA event (DANA is a Spanish acronym for Depresión Aislada en Niveles Altos, translating to high-altitude isolated depression) that resulted in extremely heavy rainfall and devastating floods in Valencia, Spain, on October 29, 2024. We created a novel dataset of 650 TikTok and X posts, which was manually annotated to differentiate between disinformation and trustworthy content. Additionally, a Few-Shot annotation approach with GPT-4o achieved substantial agreement (Cohen's kappa of 0.684) with manual labels. Emotion analysis revealed that disinformation on X is mainly associated with increased sadness and fear, while on TikTok, it correlates with higher levels of anger and disgust. Linguistic analysis using the LIWC dictionary showed that trustworthy content utilizes more articulate and factual language, whereas disinformation employs negations, perceptual words, and personal anecdotes to appear credible. Audio analysis of TikTok posts highlighted distinct patterns: trustworthy audios featured brighter tones and robotic or monotone narration, promoting clarity and credibility, while disinformation audios leveraged tonal variation, emotional depth, and manipulative musical elements to amplify engagement. In detection models, SVM+TF-IDF achieved the highest F1-Score, excelling with limited data. Incorporating audio features into roberta-large-bne improved both Accuracy and F1-Score, surpassing its text-only counterpart and SVM in Accuracy. GPT-4o Few-Shot also performed well, showcasing the potential of large language models for automated disinformation detection. These findings demonstrate the importance of leveraging both textual and audio features for improved disinformation detection on multimodal platforms like TikTok.
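The TF-IDF featurisation underlying the SVM+TF-IDF baseline can be sketched with the standard library alone. The toy texts and the nearest-centroid decision at the end are our illustrative stand-ins (the paper trains an SVM on real posts); the smoothed idf formula mirrors the common scikit-learn convention.

```python
import math
from collections import Counter

def fit_tfidf(docs):
    """Smoothed inverse document frequency: log((1+n)/(1+df)) + 1."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    return {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}

def transform(doc, idf):
    tf = Counter(doc.split())
    return {t: tf[t] * idf[t] for t in tf if t in idf}

def centroid(vecs):
    c = Counter()
    for v in vecs:
        c.update(v)                      # Counter.update sums mapping values
    return {t: x / len(vecs) for t, x in c.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

trust = ["official report confirms rainfall data",
         "confirmed rainfall totals from official sources"]
disinfo = ["they are hiding the truth from you",
           "secret plan hidden truth exposed"]
idf = fit_tfidf(trust + disinfo)
c_trust = centroid([transform(d, idf) for d in trust])
c_dis = centroid([transform(d, idf) for d in disinfo])

v = transform("official rainfall data confirmed", idf)
label = "trustworthy" if cosine(v, c_trust) > cosine(v, c_dis) else "disinformation"
print(label)
```

An SVM would replace the nearest-centroid comparison with a learned max-margin boundary over the same sparse TF-IDF vectors.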


REX: Causal Discovery based on Machine Learning and Explainability techniques

Renero, Jesus, Ochoa, Idoia, Maestre, Roberto

arXiv.org Artificial Intelligence

Causal discovery, the process of identifying cause-and-effect relationships from observational data, is a pivotal challenge in artificial intelligence (AI) and machine learning. Unveiling causal structures enables robust predictions, facilitates counterfactual reasoning, and enhances decision-making processes in complex systems [1]. Traditional methods for causal discovery often rely on statistical tests for independence and structural equation modeling, which may not scale efficiently with high-dimensional data or effectively capture intricate non-linear relationships [2, 3]. In recent years, machine learning models, particularly deep learning architectures, have achieved remarkable success in predictive tasks. However, these models are typically considered "black boxes" due to their lack of interpretability. This opacity has led to a growing interest in explainable AI (XAI) techniques, with Shapley values emerging as a prominent method for interpreting model predictions [4]. Shapley values, grounded in cooperative game theory, provide a principled approach to attributing the contribution of each feature to the output of a model by quantifying the average marginal contribution of a feature across all possible subsets of features [5]. While Shapley values offer valuable insights into feature importance within a model's predictive framework, the link between feature importance and causal influence is non-trivial.
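The "average marginal contribution across all possible subsets" definition of the Shapley value can be computed exactly for small feature sets by direct enumeration. The combinatorial weighting below is the standard cooperative-game formula; the toy value function is an assumption for illustration.

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values: for each player p, sum over subsets S not
    containing p the weighted marginal contribution v(S ∪ {p}) − v(S),
    with weight |S|!(n−|S|−1)!/n!."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(S) | {p}) - value(set(S)))
        phi[p] = total
    return phi

# Toy additive value function: feature "a" contributes 10, "b" contributes 5.
v = lambda S: (10 if "a" in S else 0) + (5 if "b" in S else 0)
phi = shapley(["a", "b", "c"], v)
print(phi)  # a → 10.0, b → 5.0, c → 0.0; shares sum to v({a,b,c}) = 15
```

Enumeration costs O(2^n), which is why practical XAI libraries approximate these values by sampling; the exact version above makes the definition in the text concrete.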


Clinical Evaluation of Medical Image Synthesis: A Case Study in Wireless Capsule Endoscopy

Gatoula, Panagiota, Diamantis, Dimitrios E., Koulaouzidis, Anastasios, Carretero, Cristina, Chetcuti-Zammit, Stefania, Valdivia, Pablo Cortegoso, González-Suárez, Begoña, Mussetto, Alessandro, Plevris, John, Robertson, Alexander, Rosa, Bruno, Toth, Ervin, Iakovidis, Dimitris K.

arXiv.org Artificial Intelligence

Sharing retrospectively acquired data is essential for both clinical research and training. Synthetic Data Generation (SDG), using Artificial Intelligence (AI) models, can overcome privacy barriers in sharing clinical data, enabling advancements in medical diagnostics. This study focuses on the clinical evaluation of medical SDG, with a proof-of-concept investigation on diagnosing Inflammatory Bowel Disease (IBD) using Wireless Capsule Endoscopy (WCE) images. The paper contributes by a) presenting a protocol for the systematic evaluation of synthetic images by medical experts and b) applying it to assess TIDE-II, a novel variational autoencoder-based model for high-resolution WCE image synthesis, with a comprehensive qualitative evaluation conducted by 10 international WCE specialists, focusing on image quality, diversity, realism, and clinical decision-making. The results show that TIDE-II generates clinically relevant WCE images, helping to address data scarcity and enhance diagnostic tools. The proposed protocol serves as a reference for future research on medical image-generation techniques.