
Collaborating Authors

 Bernasconi, Alice


Towards a Transportable Causal Network Model Based on Observational Healthcare Data

arXiv.org Artificial Intelligence

Over the last decades, many prognostic models based on artificial intelligence techniques have been used to provide detailed predictions in healthcare. Unfortunately, the real-world observational data used to train and validate these models are almost always affected by biases that can strongly impact the validity of the outcomes: two examples are values missing not-at-random and selection bias. Addressing them is a key element in achieving transportability and in studying the causal relationships that are critical in clinical decision making, going beyond simpler statistical approaches based on probabilistic association. In this context, we propose a novel approach that combines selection diagrams, missingness graphs, causal discovery and prior knowledge into a single graphical model to estimate the cardiovascular risk of adolescent and young females who survived breast cancer. We learn this model from data comprising two different cohorts of patients. The resulting causal network model is validated by expert clinicians in terms of risk assessment, accuracy and explainability, and provides a prognostic model that outperforms competing machine learning methods.
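The missing-not-at-random (MNAR) mechanism mentioned above can be illustrated with a minimal simulation: when the probability that a value is unrecorded depends on the value itself, naive complete-case estimates are biased. The biomarker, its distribution, and the dropout rule below are all invented for illustration, not taken from the paper's cohorts.

```python
import random

random.seed(0)

# Simulate a biomarker for 10,000 patients (true mean = 50).
values = [random.gauss(50, 10) for _ in range(10_000)]

# MNAR dropout: the higher the value, the more likely it goes
# unrecorded (e.g. sicker patients are lost to follow-up).
observed = [v for v in values if random.random() > (v - 30) / 40]

true_mean = sum(values) / len(values)
naive_mean = sum(observed) / len(observed)

print(f"true mean:  {true_mean:.1f}")
print(f"naive mean: {naive_mean:.1f}")  # biased downwards
```

Under MCAR (missingness independent of everything), the two means would agree in expectation; the gap here is exactly the kind of distortion that missingness graphs are designed to model and correct.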


The Impact of Missing Data on Causal Discovery: A Multicentric Clinical Study

arXiv.org Artificial Intelligence

Causal inference for testing clinical hypotheses from observational data presents many difficulties because the underlying data-generating model and the associated causal graph are not usually available. Furthermore, observational data may contain missing values, which impact the recovery of the causal graph by causal discovery algorithms: a crucial issue often ignored in clinical studies. In this work, we use data from a multi-centric study on endometrial cancer to analyze the impact of different missingness mechanisms on the recovered causal graph. This is achieved by extending state-of-the-art causal discovery algorithms to exploit expert knowledge without sacrificing theoretical soundness. We validate the recovered graph with expert physicians, showing that our approach finds clinically relevant solutions. Finally, we discuss the goodness of fit of our graph and its consistency from a clinical decision-making perspective, using graphical separation to validate causal pathways.
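One way missingness corrupts causal discovery is through selection: restricting analysis to complete cases can induce a dependence between variables that are actually independent, which a discovery algorithm would then mistake for an edge. A minimal sketch under assumed data (the variable names and selection rule are hypothetical):

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# Two independent (standardized) clinical variables.
age = [random.gauss(0, 1) for _ in range(20_000)]
marker = [random.gauss(0, 1) for _ in range(20_000)]

# A record is complete only when age + marker is high: the
# missingness depends on both variables (selection / MNAR-like).
complete = [(a, m) for a, m in zip(age, marker) if a + m > 1]

full_corr = corr(age, marker)
cc_corr = corr([a for a, _ in complete], [m for _, m in complete])

print(f"full-data correlation:     {full_corr:+.2f}")  # near zero
print(f"complete-case correlation: {cc_corr:+.2f}")    # spuriously negative
```

This is the classic Berkson/selection effect: conditioning on a common consequence (here, being recorded) creates spurious association, which is why the paper models the missingness mechanism explicitly rather than discarding incomplete records.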


Risk Assessment of Lymph Node Metastases in Endometrial Cancer Patients: A Causal Approach

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) has found many applications in medicine [15] and, more specifically, in cancer research [32] in the form of predictive models for diagnosis [14], prognosis [6] and therapy planning [12]. As a subfield of AI, Machine Learning (ML), and in particular Deep Learning (DL), has achieved significant results, especially in image processing [3]. Nonetheless, ML and DL models have limited explainability [13] because of their black-box design, which limits their adoption in the clinical field: clinicians and physicians are reluctant to include models that are not transparent in their decision process [24]. While recent research on Explainable AI (XAI) [11] has addressed this problem, DL models are still opaque and difficult to interpret. In contrast, in Probabilistic Graphical Models (PGMs) the interactions between different variables are encoded explicitly: the joint probability distribution P of the variables of interest factorizes according to a graph G, hence the "graphical" connotation. Bayesian Networks (BNs) [23], which we will describe in Section 3.1, are an instance of PGMs that can be used as causal models. In turn, this makes them ideal to use as decision support systems and to overcome the limitations of the predictions based on probabilistic associations produced by other ML models [1, 19].
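The factorization of P according to a graph G can be sketched concretely. Below, a toy three-node Bayesian network over binary variables with the DAG A → B → C; the variables and probability tables are invented for illustration, and the joint is simply the product of each node's conditional distribution given its parents.

```python
from itertools import product

# Toy BN, DAG: A -> B -> C. By the BN factorization:
#   P(a, b, c) = P(a) * P(b | a) * P(c | b)
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1},
               1: {0: 0.4, 1: 0.6}}   # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.8, 1: 0.2},
               1: {0: 0.25, 1: 0.75}}  # p_c_given_b[b][c]

def joint(a, b, c):
    """Joint probability via the graph's factorization."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# The factorization defines a proper distribution: it sums to 1.
total = sum(joint(a, b, c) for a, b, c in product([0, 1], repeat=3))
print(f"sum over all states: {total:.4f}")  # 1.0000

# Queries reduce to sums over the joint, e.g. the marginal P(C = 1).
p_c1 = sum(joint(a, b, 1) for a, b in product([0, 1], repeat=2))
print(f"P(C = 1): {p_c1:.4f}")  # 0.3375
```

Because every term in the product is a local conditional distribution attached to one node, the graph makes each modeled dependence explicit and inspectable, which is the transparency advantage the abstract contrasts with black-box ML models.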