Kateb, Reem
Generalizable and Explainable Deep Learning for Medical Image Computing: An Overview
Chaddad, Ahmad, Hu, Yan, Wu, Yihang, Wen, Binbin, Kateb, Reem
Objective. This paper presents an overview of generalizable and explainable artificial intelligence (XAI) in deep learning (DL) for medical imaging, aimed at addressing the urgent need for transparency and explainability in clinical applications. Methodology. We evaluate four CNNs on three medical datasets (brain tumor, skin cancer, and chest x-ray) for medical image classification tasks, and perform paired t-tests to assess the significance of the differences observed between methods. Furthermore, we combine ResNet50 with five common XAI techniques to obtain explainable results for model prediction, aiming to improve model transparency. We also use a quantitative metric (confidence increase) to evaluate the usefulness of the XAI techniques. Key findings. The experimental results indicate that ResNet50 achieves feasible accuracy and F1 score on all datasets (e.g., 86.31% accuracy on skin cancer). Furthermore, the findings show that while certain XAI methods, such as XgradCAM, effectively highlight relevant abnormal regions in medical images, others, like EigenGradCAM, may perform less effectively in specific scenarios. In addition, XgradCAM yields a higher confidence increase (e.g., 0.12 on glioma tumor) than GradCAM++ (0.09) and LayerCAM (0.08). Implications. Based on the experimental results and recent advancements, we outline future research directions to enhance the robustness and generalizability of DL models in biomedical imaging.
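The confidence-increase metric mentioned in the abstract can be understood, in simplified form, as the average rise in the model's predicted probability when the input is masked by an XAI saliency map. A minimal NumPy sketch of this idea (the function name and the exact averaging rule are our own assumptions, not the paper's implementation):

```python
import numpy as np

def confidence_increase(orig_conf, masked_conf):
    """Average increase in predicted-class confidence when the image is
    masked by the saliency map (CAM). Higher values suggest the explanation
    highlights regions that genuinely support the prediction."""
    orig = np.asarray(orig_conf, dtype=float)
    masked = np.asarray(masked_conf, dtype=float)
    # count only cases where confidence actually rose, averaged over all samples
    return float(np.mean(np.maximum(masked - orig, 0.0)))

# toy example: confidences before and after masking with a CAM heatmap
before = [0.70, 0.55, 0.80]
after = [0.85, 0.60, 0.78]
print(round(confidence_increase(before, after), 4))  # 0.0667
```

A value near zero would indicate the highlighted regions do not raise the model's confidence, which is how the abstract distinguishes XgradCAM from weaker methods on these datasets.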
FAA-CLIP: Federated Adversarial Adaptation of CLIP
Wu, Yihang, Chaddad, Ahmad, Desrosiers, Christian, Daqqaq, Tareef, Kateb, Reem
Despite the remarkable performance of vision language models (VLMs) such as Contrastive Language Image Pre-training (CLIP), the large size of these models is a considerable obstacle to their use in federated learning (FL) systems, where the parameters of local client models must be transferred to a global server for aggregation. Another challenge in FL is the heterogeneity of data across clients, which affects the generalization performance of the solution. In addition, VLMs pre-trained on natural images exhibit poor generalization on medical datasets, suggesting a domain gap. To solve these issues, we introduce a novel method for the Federated Adversarial Adaptation (FAA) of CLIP. Our method, named FAA-CLIP, handles the large communication costs of CLIP using a lightweight feature adaptation module (FAM) for aggregation, effectively adapting this VLM to each client's data while greatly reducing the number of parameters to transfer. By keeping CLIP frozen and only updating the FAM parameters, our method is also computationally efficient. Unlike existing approaches, FAA-CLIP directly addresses the problem of domain shifts across clients via a domain adaptation (DA) module. This module employs a domain classifier to predict whether a given sample comes from the local client or the global server, allowing the model to learn domain-invariant representations. Extensive experiments on six datasets containing both natural and medical images demonstrate that FAA-CLIP generalizes well on both compared to recent FL approaches. Our codes are available at https://github.com/AIPMLab/F
While models based on deep learning (DL) have achieved ground-breaking results in a broad range of computer vision and natural language understanding tasks, their performance often depends on the availability of large datasets [1].
In recent years, there has been growing concern about data privacy and security, with many organizations implementing regulations and laws such as the EU General Data Protection Regulation (GDPR) [2]. These restrictions on sharing raw data across organizations pose a significant challenge for training robust DL models in fields like medical imaging, where privacy is of utmost importance. One of the most promising solutions to this problem is federated learning (FL).
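The federated learning scheme sketched above, where clients share only parameter updates with a server that aggregates them, reduces in its most common form to weighted federated averaging (FedAvg). A minimal NumPy sketch of the aggregation step (our own illustration, not tied to any specific system described in these papers):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a weighted average, where each
    client's weight is proportional to its local dataset size (n_k / n).
    `client_weights` is a list of flattened parameter vectors, one per client."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()  # per-client mixing coefficients
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return coeffs @ stacked       # weighted sum of parameter vectors

# toy example: two clients with different data volumes; no raw data is exchanged
global_params = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[10, 30])
print(global_params)  # [2.5 3.5]
```

Only these averaged parameters travel between sites, which is why FL is compatible with privacy regulations such as the GDPR mentioned above.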
Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Chaddad, Ahmad, Lu, Qizong, Li, Jiali, Katib, Yousef, Kateb, Reem, Tanougast, Camel, Bouridane, Ahmed, Abdulkadir, Ahmed
Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. (1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations reflect only a small portion of the justification. (2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, in a classification task based on images acquired with different hardware. (3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.