explainable ai
- Europe > Sweden > Skåne County > Malmö (0.04)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
- North America > United States > Massachusetts > Middlesex County > Lexington (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.72)
- Health & Medicine (1.00)
- Government > Military (0.95)
- Government > Regional Government > North America Government > United States Government (0.46)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.83)
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Explainable AI (XAI) is a rapidly growing domain with a myriad of proposed methods as well as metrics aiming to evaluate their efficacy. However, current studies are often of limited scope, examining only a handful of XAI methods and ignoring underlying design parameters for performance, such as the model architecture or the nature of input data. Moreover, they often rely on one or a few metrics and neglect thorough validation, increasing the risk of selection bias and ignoring discrepancies among metrics. These shortcomings leave practitioners confused about which method to choose for their problem. In response, we introduce LATEC, a large-scale benchmark that critically evaluates 17 prominent XAI methods using 20 distinct metrics.
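To make concrete what this kind of metric-driven evaluation involves, below is a minimal sketch of a deletion-style faithfulness test, one common family of XAI metrics; the model interface, masking baseline, and step count are illustrative assumptions, not LATEC's actual implementation.

```python
import numpy as np

def deletion_score(model, x, attribution, steps=20, baseline=0.0):
    """Deletion test: mask the most-attributed features first and track
    how quickly the model's confidence in its original prediction drops.
    A faithful attribution produces a fast drop, i.e. a low mean confidence."""
    x = x.astype(float)                             # work on a copy
    order = np.argsort(attribution.ravel())[::-1]   # most important first
    probs = model(x[None])[0]                       # model returns class probabilities
    target = int(probs.argmax())                    # class being explained
    confidences = [probs[target]]
    chunk = max(1, order.size // steps)
    for i in range(0, order.size, chunk):
        x.ravel()[order[i:i + chunk]] = baseline    # delete the next chunk
        confidences.append(model(x[None])[0][target])
    return float(np.mean(confidences))              # lower = more faithful
```

Running this for each candidate attribution method on the same inputs yields one column of a benchmark table; disagreements between such metrics are exactly the discrepancies the abstract warns about.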
The Utility of Explainable AI in Ad Hoc Human-Machine Teaming
Recent advances in machine learning have led to growing interest in Explainable AI (xAI) to enable humans to gain insight into the decision-making of machine learning models. Despite this recent interest, the utility of xAI techniques has not yet been characterized in human-machine teaming. Importantly, xAI offers the promise of enhancing team situational awareness (SA) and shared mental model development, which are the key characteristics of effective human-machine teams. Rapidly developing such mental models is especially critical in ad hoc human-machine teaming, where agents do not have a priori knowledge of others' decision-making strategies.
A Clinically Interpretable Deep CNN Framework for Early Chronic Kidney Disease Prediction Using Grad-CAM-Based Explainable AI
Ayub, Anas Bin, Niha, Nilima Sultana, Haque, Md. Zahurul
Chronic Kidney Disease (CKD) constitutes a major global medical burden, marked by the gradual deterioration of renal function, which results in impaired clearance of metabolic waste and disturbances in systemic fluid homeostasis. Owing to its substantial contribution to worldwide morbidity and mortality, reliable and efficient diagnostic approaches are critically important for early detection and prompt clinical management. This study presents a deep convolutional neural network (CNN) for early CKD detection from CT kidney images, complemented by class balancing using the Synthetic Minority Over-sampling Technique (SMOTE) and interpretability via Gradient-weighted Class Activation Mapping (Grad-CAM). The model was trained and evaluated on the CT KIDNEY DATASET, which contains 12,446 CT images: 3,709 cyst, 5,077 normal, 1,377 stone, and 2,283 tumor cases. The proposed deep CNN achieved remarkable classification performance, attaining 100% accuracy in early CKD detection. This advancement demonstrates strong potential for addressing critical clinical diagnostic challenges and enhancing early medical intervention strategies.
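As a rough illustration of the Grad-CAM step mentioned above, here is a minimal PyTorch sketch; the hook-based implementation, layer choice, and normalization are generic assumptions rather than the study's exact code.

```python
import torch

def grad_cam(model, image, target_layer, class_idx=None):
    """Gradient-weighted Class Activation Mapping: weight the target
    layer's feature maps by the spatially pooled gradients of the class
    score, then keep only the positive evidence."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        logits = model(image.unsqueeze(0))              # (1, n_classes)
        if class_idx is None:
            class_idx = int(logits.argmax(dim=1))
        model.zero_grad()
        logits[0, class_idx].backward()
        acts, grads = activations[0], gradients[0]      # (1, C, H, W)
        weights = grads.mean(dim=(2, 3), keepdim=True)  # pooled gradient per channel
        cam = torch.relu((weights * acts).sum(dim=1))   # (1, H, W)
        return (cam / (cam.max() + 1e-8)).squeeze(0).detach()
    finally:
        fwd.remove()
        bwd.remove()
```

For a typical CNN backbone, target_layer would be the last convolutional block; the returned map is upsampled to the input resolution and overlaid on the CT slice to show which kidney regions drove the prediction.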
- Asia > Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.05)
- North America > United States > Maryland > Montgomery County > Bethesda (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
xEEGNet: Towards Explainable AI in EEG Dementia Classification
Zanola, Andrea, Tshimanga, Louis Fabrice, Del Pup, Federico, Baiesi, Marco, Atzori, Manfredo
This work presents xEEGNet, a novel, compact, and explainable neural network for EEG data analysis. It is fully interpretable and reduces overfitting through a major reduction in parameters. As an application, we focused on classifying common dementia conditions, Alzheimer's disease and frontotemporal dementia, versus controls; xEEGNet is broadly applicable to other neurological conditions involving spectral alterations. We initially used ShallowNet, a simple and popular model from the EEGNet family. Its structure was analyzed and gradually modified to move from a "black box" to a more transparent model, without compromising performance. The learned kernels and weights were examined from a clinical standpoint to assess medical relevance. Model variants, including ShallowNet and the final xEEGNet, were evaluated using robust Nested-Leave-N-Subjects-Out cross-validation for unbiased performance estimates. Variability across data splits was analyzed using embedded EEG representations, grouped by class and set, with pairwise separability quantifying group distinction. Overfitting was assessed through training-validation loss correlation and training speed. xEEGNet uses only 168 parameters, 200 times fewer than ShallowNet, yet retains interpretability, resists overfitting, achieves comparable median performance (-1.5%), and reduces variability across splits. This variability is explained by the embedded EEG representations: higher accuracy correlates with greater separation between test-set controls and Alzheimer's cases, without significant influence from the training data. xEEGNet's ability to filter specific EEG bands, learn band-specific topographies, and use relevant spectral features demonstrates its interpretability. While large deep learning models are often prioritized for performance, this study shows that smaller architectures like xEEGNet can be equally effective in EEG pathology classification.
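A minimal sketch of the subject-grouped splitting that underlies Nested-Leave-N-Subjects-Out cross-validation, using scikit-learn's GroupKFold as a stand-in; the fold counts and array layout are illustrative, not the paper's exact protocol.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def nested_subject_splits(X, y, subjects, outer_folds=5, inner_folds=4):
    """Yield (train, val, test) index triples. X, y, subjects are NumPy
    arrays with one row / label / subject ID per EEG recording; no
    subject ever appears in more than one role within a split."""
    outer = GroupKFold(n_splits=outer_folds)
    for train_val, test in outer.split(X, y, groups=subjects):
        inner = GroupKFold(n_splits=inner_folds)
        for tr, va in inner.split(X[train_val], y[train_val],
                                  groups=subjects[train_val]):
            yield (train_val[tr],   # fit the model on these recordings
                   train_val[va],   # tune / early-stop on these
                   test)            # report unbiased accuracy on these
```

Because entire subjects are held out, the reported accuracy reflects generalization to unseen patients rather than to unseen segments of already-seen recordings.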
- Europe > Italy (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- (4 more...)
Analysis of Incursive Breast Cancer in Mammograms Using YOLO, Explainability, and Domain Adaptation
Adhikari, Jayan, Joshi, Prativa, Baral, Susish
Deep learning models for breast cancer detection from mammographic images have significant reliability problems when presented with out-of-distribution (OOD) inputs, such as other imaging modalities (CT, MRI, X-ray) or equipment variations, leading to unreliable detection and misdiagnosis. Our strategy establishes an in-domain gallery and uses cosine similarity to strictly reject non-mammographic inputs before processing, ensuring that only domain-associated images reach the detection pipeline. The OOD detection component achieves 99.77% overall accuracy, with 100% accuracy on OOD test sets, effectively eliminating irrelevant imaging modalities. ResNet50 was selected as the optimal backbone after a search over 12 CNN architectures. The joint framework unites OOD robustness with high detection performance (mAP@0.5). Experimental validation establishes that OOD filtering significantly improves system reliability by preventing false alarms on out-of-distribution inputs while maintaining high detection accuracy on mammographic data. This study offers a foundation for deploying reliable AI-based breast cancer detection systems in diverse clinical environments with inherent data heterogeneity. A global health concern, breast cancer is the second-leading cause of cancer-related mortality in women and was the most diagnosed cancer in the world in 2020 [1]. According to the World Health Organization, cancers of all types account for 626,700 deaths of women globally, among which breast cancer is the predominant and second-leading cause [2]. If diagnosed at an early stage of development, the survival rate is likely to be high and treatment costs are reduced [3]. Studies have found that 30% of breast cancers are diagnosed when the mass has reached a size of 30 mm.
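A minimal sketch of the gallery-based OOD filter described above: embed each incoming scan, compare it against stored mammogram embeddings by cosine similarity, and reject anything that falls below a threshold. The encoder and the threshold value are placeholders, not the authors' calibrated settings.

```python
import numpy as np

class InDomainGallery:
    """Gate the detection pipeline: an image counts as in-domain only if
    its embedding is cosine-similar enough to at least one reference
    mammogram embedding in the gallery."""

    def __init__(self, encoder, gallery_images, threshold=0.85):
        self.encoder = encoder      # e.g. a CNN feature extractor returning 1-D vectors
        self.threshold = threshold  # illustrative; must be calibrated on held-out data
        feats = np.stack([encoder(img) for img in gallery_images])
        self.gallery = feats / np.linalg.norm(feats, axis=1, keepdims=True)

    def is_in_domain(self, image):
        f = self.encoder(image)
        f = f / np.linalg.norm(f)
        sims = self.gallery @ f     # cosine similarity to every gallery item
        return float(sims.max()) >= self.threshold
```

Only images passing this gate are forwarded to the YOLO detector; everything else is flagged as OOD instead of producing a meaningless detection.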
- North America > United States (0.04)
- Oceania > New Zealand (0.04)
- Europe > Portugal (0.04)
- (2 more...)
Reversing the Lens: Using Explainable AI to Understand Human Expertise
Rahman, Roussel, Mishra, Aashwin Ananda, Hu, Wan-Lin
Both humans and machine learning models learn from experience, particularly in safety- and reliability-critical domains. While psychology seeks to understand human cognition, the field of Explainable AI (XAI) develops methods to interpret machine learning models. This study bridges these domains by applying computational tools from XAI to analyze human learning. We modeled human behavior during a complex real-world task -- tuning a particle accelerator -- by constructing graphs of operator subtasks. Applying techniques such as community detection and hierarchical clustering to archival operator data, we reveal how operators decompose the problem into simpler components and how these problem-solving structures evolve with expertise. Our findings illuminate how humans develop efficient strategies in the absence of globally optimal solutions, and demonstrate the utility of XAI-based methods for quantitatively studying human cognition.
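As a rough sketch of the graph analysis described above: build a weighted graph of observed subtask transitions and extract communities from it. networkx's greedy modularity method stands in for whichever community-detection algorithm the authors used, and the example log is invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def subtask_communities(transitions):
    """transitions: iterable of (subtask_a, subtask_b) pairs from operator
    logs. Returns groups of subtasks that form densely connected clusters,
    i.e. candidate sub-problems the operator treats as a unit."""
    G = nx.Graph()
    for a, b in transitions:
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1      # repeated transitions strengthen the tie
        else:
            G.add_edge(a, b, weight=1)
    return [set(c) for c in greedy_modularity_communities(G, weight="weight")]

# Invented log: an operator alternating within two clusters of controls.
log = [("set_quad_1", "set_quad_2"), ("set_quad_2", "set_quad_1"),
       ("check_beam", "adjust_rf"), ("adjust_rf", "check_beam"),
       ("set_quad_2", "check_beam")]
print(subtask_communities(log))         # two communities, one bridging edge
```

Tracking how these communities shift as operators accumulate experience is then the quantity of interest, per the abstract above.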
- North America > United States > California > San Mateo County > Menlo Park (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.61)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (0.47)
Explainable AI for Curie Temperature Prediction in Magnetic Materials
Ajaib, M. Adeel, Nasir, Fariha, Rehman, Abdul
Traditional approaches based on quantum mechanical computations or empirical models are often limited in scalability and accuracy. In recent years, machine learning (ML) has emerged as a promising alternative for property prediction across materials science domains [1-9]. Building on this momentum, several recent studies have proposed the use of ML models trained on curated magnetic datasets. In particular, the recent study [10] introduced the NE-MAD database, which aggregates experimentally measured magnetic transition temperatures and compositions. Similarly, the study [11] utilized two of the largest available datasets of experimental Curie temperatures, comprising over 2,500 materials for training and more than 3,000 entries for validation, to compare machine learning strategies for predicting Curie temperature solely from chemical composition. Our work is inspired by these prior efforts and aims to improve predictive accuracy and gain insights into model interpretability. We develop a pipeline that starts from the NE-MAD dataset, augments it with compositional and elemental features, and evaluates several ML models. A key contribution of our work is the integration of explainable AI (XAI) through SHAP (SHapley Additive exPlanations) analysis, which allows us to quantify how each input feature contributes to the model's prediction. Moreover, we benchmark our models on external datasets from the literature to demonstrate generalization.
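A minimal sketch of the SHAP analysis described above, assuming a tree-based regressor; the feature names and synthetic data are hypothetical stand-ins for the NE-MAD-derived feature table.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical feature table: in the paper this is NE-MAD compositions
# expanded into compositional and elemental descriptors.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["mean_atomic_radius", "frac_Fe",
                          "frac_Co", "valence_electrons"])
y = 300 * X["frac_Fe"] + 150 * X["frac_Co"] + rng.normal(scale=20, size=500)

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)    # exact Shapley values for tree models
shap_values = explainer.shap_values(X)   # one attribution per feature per sample

# Mean |SHAP| gives a global ranking of which descriptors drive predicted T_C.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0).round(1))))
```

On the synthetic data the iron and cobalt fractions dominate by construction; on the real feature table the same ranking quantifies which descriptors the model actually leans on.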
Explainable AI For Early Detection Of Sepsis
Thakur, Atharva, Dhumal, Shruti
Department of Multidisciplinary Engineering (AI & DS), Vishwakarma Institute of Technology, Pune, 411037, Maharashtra, India
Sepsis is a potentially fatal medical condition that must be identified and treated promptly to prevent fatalities and to stop its progression to severe sepsis, septic shock, and multi-organ failure. Despite advancements in medical technology and treatment methods, sepsis remains a significant problem for clinicians. Machine learning models have in recent years successfully predicted the onset of the disease, but their black-box character makes it difficult to interpret these predictions and to understand the underlying illness mechanisms. In this research, we propose a comprehensible AI method for sepsis analysis that combines machine learning with clinical knowledge and domain expertise. In addition to providing precise predictions of sepsis onset, our method allows clinicians to understand and verify the model's predictions against clinical expertise and pre-existing beliefs.
Keywords: Sepsis, Artificial Intelligence, Machine Learning, Explainable AI, Sensitivity Analysis
I. INTRODUCTION
As the world continues to advance in technology, the potential of artificial intelligence (AI) in healthcare is becoming more apparent.
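The keywords list sensitivity analysis; below is a minimal one-at-a-time version of that idea, perturbing each vital sign and measuring how the predicted sepsis risk moves. The feature set, relative perturbation, and predict_risk wrapper are illustrative assumptions, not the authors' protocol.

```python
def one_at_a_time_sensitivity(predict_risk, patient, delta=0.05):
    """Perturb each clinical feature by +/-delta (relative) and report the
    local slope of the predicted sepsis probability. Large magnitudes flag
    the vitals the model leans on, which clinicians can sanity-check
    against domain knowledge."""
    base = predict_risk(patient)
    slopes = {}
    for name, value in patient.items():
        up, down = dict(patient), dict(patient)
        up[name] = value * (1 + delta)      # assumes nonzero numeric features
        down[name] = value * (1 - delta)
        slopes[name] = (predict_risk(up) - predict_risk(down)) / (2 * delta)
    return base, slopes

# Hypothetical usage, where predict_risk wraps the trained model:
# patient = {"heart_rate": 112, "temp_c": 38.7, "wbc": 14.2, "lactate": 2.9}
# base, slopes = one_at_a_time_sensitivity(predict_risk, patient)
```

A clinician can then compare the sign and size of each slope with established sepsis markers, which is the verification step the abstract describes.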
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.95)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.91)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.89)
Unlocking the Black Box: A Five-Dimensional Framework for Evaluating Explainable AI in Credit Risk
The financial industry faces a significant challenge in modeling credit risk portfolios: balancing the predictive power of advanced machine learning models, such as neural networks, against the explainability required by regulatory entities (such as the Office of the Comptroller of the Currency and the Consumer Financial Protection Bureau). This paper aims to fill the gap between these "black box" models and explainability frameworks such as LIME and SHAP. The authors apply these frameworks to different models and demonstrate that more complex models with stronger predictive power can be deployed while reaching the same level of explainability using SHAP and LIME. Beyond the comparison and discussion of performance, this paper proposes a novel five-dimensional framework evaluating Inherent Interpretability, Global Explanations, Local Explanations, Consistency, and Complexity, offering a nuanced method for assessing and comparing model explainability beyond simple accuracy metrics. This research demonstrates the feasibility of employing sophisticated, high-performing ML models in regulated financial environments by utilizing modern explainability techniques, and it provides a structured approach to evaluating the crucial trade-offs between model performance and interpretability.
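As a concrete illustration of the LIME side of such a comparison, here is a minimal sketch that explains a single credit decision from a tabular classifier; the features, data, and model are hypothetical, not the paper's setup.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical credit features; in practice these come from the loan book.
feature_names = ["income", "debt_to_income", "credit_history_len", "utilization"]
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["approve", "deny"], mode="classification")

# Local explanation for one applicant: which features pushed toward denial.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())                     # (feature condition, weight) pairs
```

Repeating this per model and pairing it with a global SHAP summary is what allows a framework like the one proposed here to score local explanations, global explanations, and consistency separately.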
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England (0.04)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Economy (1.00)
- Banking & Finance > Credit (1.00)