Risk Assessment of Lymph Node Metastases in Endometrial Cancer Patients: A Causal Approach

Zanga, Alessio, Bernasconi, Alice, Lucas, Peter J. F., Pijnenborg, Hanny, Reijnen, Casper, Scutari, Marco, Stella, Fabio

arXiv.org, Artificial Intelligence

Artificial Intelligence (AI) has found many applications in medicine [15] and, more specifically, in cancer research [32], in the form of predictive models for diagnosis [14], prognosis [6] and therapy planning [12]. As subfields of AI, Machine Learning (ML) and in particular Deep Learning (DL) have achieved significant results, especially in image processing [3]. Nonetheless, ML and DL models offer limited explainability [13] because of their black-box design, which hinders their adoption in the clinical field: clinicians and physicians are reluctant to incorporate models that are not transparent into their decision process [24]. While recent research on Explainable AI (XAI) [11] has addressed this problem, DL models remain opaque and difficult to interpret.

In contrast, Probabilistic Graphical Models (PGMs) encode the interactions between variables explicitly: the joint probability distribution P of the variables of interest factorizes according to a graph G, hence the "graphical" connotation. Bayesian Networks (BNs) [23], which we describe in Section 3.1, are an instance of PGMs that can be used as causal models. In turn, this makes them ideal to use as decision support systems and to overcome the limitations of predictions based on probabilistic associations produced by other ML models [1, 19].
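To make the factorization concrete (a standard formulation, anticipating the notation introduced in Section 3.1): a BN over variables X_1, ..., X_n with directed acyclic graph G decomposes the joint distribution into local conditional distributions, one per node given its parents in G,

\[
P(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\bigl(X_i \mid \mathrm{Pa}_G(X_i)\bigr).
\]

For instance, for the chain G: X_1 -> X_2 -> X_3 the joint distribution reduces to P(X_1) P(X_2 | X_1) P(X_3 | X_2), so each variable requires only a conditional distribution given its parents rather than the full joint table. It is this explicit, graph-level structure that makes the model's reasoning inspectable by clinicians.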
