neoplasm


DeepGI: Explainable Deep Learning for Gastrointestinal Image Classification

Houmaidi, Walid, Hadadi, Mohamed, Sabiri, Youssef, Chtouki, Yousra

arXiv.org Artificial Intelligence

This paper presents a comprehensive comparative model analysis on a novel gastrointestinal medical imaging dataset, comprising 4,000 endoscopic images spanning four critical disease classes: Diverticulosis, Neoplasm, Peritonitis, and Ureters. Leveraging state-of-the-art deep learning techniques, the study confronts common endoscopic challenges such as variable lighting, fluctuating camera angles, and frequent imaging artifacts. The best-performing models, VGG16 and MobileNetV2, each achieved a test accuracy of 96.5%, while Xception reached 94.24%, establishing robust benchmarks and baselines for automated disease classification. In addition to strong classification performance, the approach includes explainable AI via Grad-CAM visualization, enabling identification of image regions most influential to model predictions and enhancing clinical interpretability. Experimental results demonstrate the potential for robust, accurate, and interpretable medical image analysis even in complex real-world conditions. This work contributes original benchmarks, comparative insights, and visual explanations, advancing the landscape of gastrointestinal computer-aided diagnosis and underscoring the importance of diverse, clinically relevant datasets and model explainability in medical AI research.
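Grad-CAM itself reduces to a simple computation: channel weights are the spatially averaged gradients of the class score with respect to the final convolutional feature maps, and the heatmap is the ReLU of the weighted sum of those maps. A minimal pure-Python sketch with toy feature maps and gradients (the paper's actual backbones are VGG16, MobileNetV2, and Xception):

```python
def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv feature maps and class-score gradients.

    feature_maps, gradients: lists of C channels, each an H x W list of lists.
    Returns an H x W heatmap: ReLU(sum_c w_c * A_c), with w_c the mean of gradients[c].
    """
    n_ch = len(feature_maps)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # Channel importance: global average pooling of the gradients.
    weights = [sum(sum(row) for row in gradients[c]) / (h * w) for c in range(n_ch)]
    cam = [[0.0] * w for _ in range(h)]
    for c in range(n_ch):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weights[c] * feature_maps[c][i][j]
    # ReLU: keep only regions that positively influence the class score.
    return [[max(0.0, v) for v in row] for row in cam]

# Toy example: 2 channels of 2x2 feature maps and their gradients.
fmaps = [[[1, 2], [3, 4]], [[0, 1], [1, 0]]]
grads = [[[1, 1], [1, 1]], [[-1, -1], [-1, -1]]]
cam = grad_cam(fmaps, grads)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the endoscopic image to show which regions drove the prediction.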


Revealing Interconnections between Diseases: from Statistical Methods to Large Language Models

Ermilova, Alina, Kornilov, Dmitrii, Samoilova, Sofia, Laptenkova, Ekaterina, Kolesnikova, Anastasia, Podplutova, Ekaterina, Senotrusova, Sofya, Sharaev, Maksim G.

arXiv.org Artificial Intelligence

Identifying disease interconnections through manual analysis of large-scale clinical data is labor-intensive, subjective, and prone to expert disagreement. While machine learning (ML) shows promise, three critical challenges remain: (1) selecting optimal methods from the vast ML landscape, (2) determining whether real-world clinical data (e.g., electronic health records, EHRs) or structured disease descriptions yield more reliable insights, and (3) the lack of "ground truth," as some disease interconnections remain unexplored in medicine. Large language models (LLMs) demonstrate broad utility, yet they often lack specialized medical knowledge. Our framework integrates the following: (i) a statistical co-occurrence analysis and a masked language modeling (MLM) approach using real clinical data; (ii) domain-specific BERT variants (Med-BERT and BioClinicalBERT); (iii) a general-purpose BERT and document retrieval; and (iv) four LLMs (Mistral, DeepSeek, Qwen, and YandexGPT). Our graph-based comparison of the obtained interconnection matrices shows that the LLM-based approach produces interconnections with the lowest diversity of ICD code connections to different diseases compared to other methods, including text-based and domain-based approaches. This suggests an important implication: LLMs have limited potential for discovering new interconnections. In the absence of ground truth databases for medical interconnections between ICD codes, our results constitute a valuable medical disease ontology that can serve as a foundational resource for future clinical research and artificial intelligence applications in healthcare. Electronic health records (EHRs) provide a valuable resource for studying disease progression and relationships between diagnoses. Machine learning (ML) can help discover hidden patterns in medical data, but many existing models are hard to interpret.
In particular, it is not always clear whether large language models (LLMs) make predictions based on meaningful medical knowledge or simply rely on textual similarities between diagnosis descriptions (Cui et al., 2025). This is especially critical in healthcare, where model decisions must align with established medical knowledge and pathophysiological mechanisms. We also analyze and compare the obtained results and summarize them into a medical disease ontology.
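The statistical baseline in (i) can be illustrated compactly: count how often pairs of ICD codes co-occur across patient records, then normalize into a conditional probability. The codes and patients below are hypothetical; the actual study uses real EHRs:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(patients):
    """Pairwise co-occurrence counts and P(b | a) from per-patient ICD code sets."""
    single, pair = Counter(), Counter()
    for codes in patients:
        codes = set(codes)
        single.update(codes)
        # Unordered pairs of codes seen together in one patient record.
        pair.update(frozenset(p) for p in combinations(sorted(codes), 2))
    def p_given(b, a):
        return pair[frozenset((a, b))] / single[a]
    return single, pair, p_given

# Hypothetical patient records (ICD-10 codes).
patients = [{"I10", "E11"}, {"I10", "E11", "N18"}, {"I10"}]
single, pair, p_given = cooccurrence(patients)
```

Thresholding such conditional probabilities yields one of the interconnection matrices that the paper's graph-based comparison evaluates against the LLM-derived ones.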


Explainable machine learning for neoplasms diagnosis via electrocardiograms: an externally validated study

Alcaraz, Juan Miguel Lopez, Haverkamp, Wilhelm, Strodthoff, Nils

arXiv.org Artificial Intelligence

Background: Neoplasms remain a leading cause of mortality worldwide, with timely diagnosis being crucial for improving patient outcomes. Current diagnostic methods are often invasive, costly, and inaccessible to many populations. Electrocardiogram (ECG) data, widely available and non-invasive, has the potential to serve as a tool for neoplasm diagnosis by using physiological changes in cardiovascular function associated with the presence of neoplasms. Methods: This study explores the application of machine learning models to analyze ECG features for the diagnosis of neoplasms. We developed a pipeline integrating tree-based models with Shapley values for explainability. The model was trained and internally validated, then externally validated on a second large-scale independent cohort to ensure robustness and generalizability. Findings: The results demonstrate that ECG data can effectively capture neoplasm-associated cardiovascular changes, achieving high performance in both internal testing and external validation cohorts. Shapley values identified key ECG features influencing model predictions, revealing established and novel cardiovascular markers linked to neoplastic conditions. This non-invasive approach provides a cost-effective and scalable alternative for the diagnosis of neoplasms, particularly in resource-limited settings. It is similarly useful for the management of secondary cardiovascular effects of neoplasm therapies. Interpretation: This study highlights the feasibility of leveraging ECG signals and machine learning to enhance neoplasm diagnostics. By offering interpretable insights into cardio-neoplasm interactions, this approach bridges existing gaps in non-invasive diagnostics and has implications for integrating ECG-based tools into broader neoplasm diagnostic frameworks, as well as neoplasm therapy management.
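Shapley values have a direct definition as a feature's average marginal contribution across all orderings in which features are revealed. Libraries such as SHAP approximate this efficiently for tree ensembles, but for a handful of features it can be computed exactly. A sketch with a hypothetical three-feature model (not the paper's actual ECG model):

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values: average marginal contribution over all feature orderings.

    Features not yet 'revealed' are held at their baseline value.
    """
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        v = list(baseline)
        prev = f(v)
        for i in order:
            v[i] = x[i]           # reveal feature i
            cur = f(v)
            phi[i] += cur - prev  # its marginal contribution in this ordering
            prev = cur
    return [p / len(orders) for p in phi]

# Hypothetical model with an interaction term between features 0 and 1.
f = lambda v: v[0] * v[1] + v[2]
phi = shapley_values(f, x=[1, 2, 3], baseline=[0, 0, 0])
```

The values satisfy the completeness property: they sum to the difference between the model output at `x` and at the baseline, which is what makes them a principled attribution for clinical interpretation.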


Cardiac and extracardiac discharge diagnosis prediction from emergency department ECGs using deep learning

Strodthoff, Nils, Alcaraz, Juan Miguel Lopez, Haverkamp, Wilhelm

arXiv.org Machine Learning

Current deep learning algorithms designed for automatic ECG analysis have exhibited notable accuracy. However, akin to traditional electrocardiography, they tend to be narrowly focused and typically address a singular diagnostic condition. In this study, we specifically demonstrate the capability of a single model to predict a diverse range of both cardiac and non-cardiac discharge diagnoses based on a single ECG collected in the emergency department. Among the 1,076 hierarchically structured ICD codes considered, our model achieves an AUROC exceeding 0.8 in 439 of them. This underscores the model's proficiency in handling a wide array of diagnostic scenarios. We emphasize the potential of utilizing this model as a screening tool, potentially integrated into a holistic clinical decision support system for efficiently triaging patients in the emergency department. This research underscores the remarkable capabilities of comprehensive ECG analysis algorithms and the extensive range of possibilities facilitated by the open MIMIC-IV-ECG dataset. Finally, our data may play a pivotal role in revolutionizing the way ECG analysis is performed, marking a significant advancement in the field.
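Per-code AUROC, as reported for the 1,076 ICD labels, is the probability that a positive case receives a higher score than a negative one (the Mann-Whitney statistic). A small self-contained sketch with made-up scores:

```python
def auroc(labels, scores):
    """AUROC via pairwise comparison: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def per_label_auroc(label_matrix, score_matrix):
    """AUROC per label column (e.g., one per ICD code), skipping degenerate labels."""
    out = []
    for ys, ss in zip(zip(*label_matrix), zip(*score_matrix)):
        if 0 < sum(ys) < len(ys):  # needs both positives and negatives
            out.append(auroc(ys, ss))
    return out

# Rows = patients, columns = two hypothetical ICD codes.
labels = [[0, 1], [1, 0], [1, 1], [0, 0]]
scores = [[0.2, 0.9], [0.7, 0.1], [0.8, 0.6], [0.1, 0.3]]
per_label = per_label_auroc(labels, scores)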


FRASIMED: a Clinical French Annotated Resource Produced through Crosslingual BERT-Based Annotation Projection

Zaghir, Jamil, Bjelogrlic, Mina, Goldman, Jean-Philippe, Aananou, Soukaïna, Gaudet-Blavignac, Christophe, Lovis, Christian

arXiv.org Artificial Intelligence

Natural language processing (NLP) applications such as named entity recognition (NER) for low-resource corpora do not benefit from recent advances in the development of large language models (LLMs), since there is still a need for larger annotated datasets. This research article introduces a methodology for generating translated versions of annotated datasets through crosslingual annotation projection. Leveraging a language-agnostic BERT-based approach, it is an efficient solution to increase low-resource corpora with little human effort and by only using already available open data resources. Quantitative and qualitative evaluations are often lacking when it comes to evaluating the quality and effectiveness of semi-automatic data generation strategies. The evaluation of our crosslingual annotation projection approach showed both effectiveness and high accuracy in the resulting dataset. As a practical application of this methodology, we present the creation of the French Annotated Resource with Semantic Information for Medical Entities Detection (FRASIMED), an annotated corpus comprising 2,051 synthetic clinical cases in French. The corpus is now available for researchers and practitioners to develop and refine French natural language processing (NLP) applications in the clinical field (https://zenodo.org/record/8355629), making it the largest open annotated corpus with linked medical concepts in French.
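The core of crosslingual annotation projection is mechanical once word alignments exist: copy each source entity span onto the aligned target tokens and re-emit consistent BIO tags. A simplified sketch, assuming the alignment has already been produced by a BERT-based aligner (the mapping below is hand-made for illustration):

```python
def project_annotations(src_tags, alignment, n_tgt):
    """Project BIO tags from source to target tokens via a word alignment.

    alignment: dict mapping source token index -> target token index.
    Target tokens aligned to an entity inherit its type; B-/I- prefixes
    are then re-derived from contiguity on the target side.
    """
    tgt_types = [None] * n_tgt
    for s, t in alignment.items():
        tag = src_tags[s]
        if tag != "O":
            tgt_types[t] = tag.split("-", 1)[1]  # entity type without B-/I-
    tgt_tags = []
    for i, typ in enumerate(tgt_types):
        if typ is None:
            tgt_tags.append("O")
        elif i > 0 and tgt_types[i - 1] == typ:
            tgt_tags.append("I-" + typ)
        else:
            tgt_tags.append("B-" + typ)
    return tgt_tags

# Hypothetical English -> French example: the disease span aligns to target tokens 2-3.
src = ["O", "B-DISEASE", "I-DISEASE", "O"]
projected = project_annotations(src, {0: 0, 1: 2, 2: 3, 3: 1}, n_tgt=4)
```

Real projections must additionally handle one-to-many and unaligned tokens, which is where alignment quality dominates the accuracy of the generated corpus.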


Sound Explanation for Trustworthy Machine Learning

Jia, Kai, Saowakon, Pasapol, Appelbaum, Limor, Rinard, Martin

arXiv.org Artificial Intelligence

We take a formal approach to the explainability problem of machine learning systems. We argue against the practice of interpreting black-box models via attributing scores to input components due to inherently conflicting goals of attribution-based interpretation. We prove that no attribution algorithm satisfies specificity, additivity, completeness, and baseline invariance. We then formalize the concept of sound explanation, which has been informally adopted in prior work. A sound explanation entails providing sufficient information to causally explain the predictions made by a system. Finally, we present the application of feature selection as a sound explanation for cancer prediction models to cultivate trust among clinicians.


NIAPU: network-informed adaptive positive-unlabeled learning for disease gene identification

Stolfi, Paola, Mastropietro, Andrea, Pasculli, Giuseppe, Tieri, Paolo, Vergni, Davide

arXiv.org Artificial Intelligence

Motivation: Gene-disease associations are fundamental for understanding disease etiology and developing effective interventions and treatments. Identifying genes not yet associated with a disease due to a lack of studies is a challenging task in which prioritization based on prior knowledge is an important element. The computational search for new candidate disease genes may be eased by positive-unlabeled learning, the machine learning setting in which only a subset of instances is labeled as positive while the rest of the data set is unlabeled. In this work, we propose a set of effective network-based features to be used in a novel Markov diffusion-based multi-class labeling strategy for putative disease gene discovery. Results: The performances of the new labeling algorithm and the effectiveness of the proposed features have been tested on ten different disease data sets using three machine learning algorithms. The new features have been compared against classical topological and functional/ontological features and a set of network- and biology-derived features already used in gene discovery tasks. The predictive power of the integrated methodology in searching for new disease genes has been found to be competitive against state-of-the-art algorithms. Availability and implementation: The source code of NIAPU can be accessed at https://github.
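The diffusion idea behind such network-based labeling can be sketched as personalized label propagation: known disease genes seed the graph, and scores spread to their neighbors until a fixed point, ranking the unlabeled genes. A toy pure-Python version (an illustration of the general principle, not the NIAPU implementation itself):

```python
def propagate_labels(adj, seeds, alpha=0.5, iters=100):
    """Score nodes by diffusing seed labels over a graph.

    adj: {node: [neighbors]}; seeds: set of known positive nodes.
    Each step mixes the seed signal with the average score of neighbors.
    """
    score = {n: (1.0 if n in seeds else 0.0) for n in adj}
    for _ in range(iters):
        new = {}
        for n, nbrs in adj.items():
            nbr_avg = sum(score[m] for m in nbrs) / len(nbrs) if nbrs else 0.0
            new[n] = (1 - alpha) * (1.0 if n in seeds else 0.0) + alpha * nbr_avg
        score = new
    return score

# Toy gene-interaction chain g0 - g1 - g2, with g0 a known disease gene.
adj = {"g0": ["g1"], "g1": ["g0", "g2"], "g2": ["g1"]}
scores = propagate_labels(adj, seeds={"g0"})
```

Unlabeled genes closer to the seeds score higher, which is exactly the ranking a positive-unlabeled setting needs in the absence of confirmed negatives.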


Personalized Prognostic Models for Oncology: A Machine Learning Approach

Dooling, David, Kim, Angela, McAneny, Barbara, Webster, Jennifer

arXiv.org Machine Learning

We have applied a little-known data transformation to subsets of the Surveillance, Epidemiology, and End Results (SEER) publicly available data of the National Cancer Institute (NCI) to make it suitable as input to standard machine learning classifiers. This transformation properly treats the right-censored data in the SEER data, and the resulting Random Forest and Multi-Layer Perceptron models predict full survival curves. Treating the 6-, 12-, and 60-month points of the resulting survival curves as 3 binary classifiers, the 18 resulting classifiers have AUC values ranging from 0.765 to 0.885. Further evidence that the models have generalized well from the training data is provided by the extremely high levels of agreement between the random forest and neural network models' predictions on the 6-, 12-, and 60-month binary classifiers.
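Handling right-censoring is the crux of such transformations: a censored patient contributes to the risk set until drop-out but never counts as an event. The standard Kaplan-Meier estimator captures this; a compact sketch with made-up follow-up times (in months) and event flags:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored data.

    times: follow-up durations; events: 1 = death observed, 0 = censored.
    Returns [(t, S(t))] at each event time; censored cases only shrink
    the risk set, they never trigger a drop in S.
    """
    s, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        at_risk = sum(1 for ti in times if ti >= t)
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
    return curve

# Hypothetical cohort: events at 6 and 60 months, one patient censored at 12 months.
curve = kaplan_meier(times=[6, 12, 60], events=[1, 0, 1])
```

Reading the estimated curve at fixed horizons (e.g., 6, 12, and 60 months) is what turns a survival model into the binary classifiers evaluated in the abstract.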