Diagnosis


Timely Clinical Diagnosis through Active Test Selection

Estévez, Silas Ruhrberg, Astorga, Nicolás, van der Schaar, Mihaela

arXiv.org Artificial Intelligence

There is growing interest in using machine learning (ML) to support clinical diagnosis, but most approaches rely on static, fully observed datasets and fail to reflect the sequential, resource-aware reasoning clinicians use in practice. Diagnosis remains complex and error prone, especially in high-pressure or resource-limited settings, underscoring the need for frameworks that help clinicians make timely and cost-effective decisions. We propose ACTMED (Adaptive Clinical Test selection via Model-based Experimental Design), a diagnostic framework that integrates Bayesian Experimental Design (BED) with large language models (LLMs) to better emulate real-world diagnostic reasoning. At each step, ACTMED selects the test expected to yield the greatest reduction in diagnostic uncertainty for a given patient. LLMs act as flexible simulators, generating plausible patient state distributions and supporting belief updates without requiring structured, task-specific training data. Clinicians can remain in the loop, reviewing test suggestions, interpreting intermediate outputs, and applying clinical judgment throughout. We evaluate ACTMED on real-world datasets and show it can optimize test selection to improve diagnostic accuracy, interpretability, and resource use. This represents a step toward transparent, adaptive, and clinician-aligned diagnostic systems that generalize across settings with reduced reliance on domain-specific data.
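
The selection rule the abstract describes, picking "the test expected to yield the greatest reduction in diagnostic uncertainty", is the standard Bayesian experimental-design criterion of expected information gain. The sketch below shows that criterion in miniature; the simulator, belief-update function, diagnosis names, and likelihood numbers are toy stand-ins for the LLM-based components the paper describes, not the authors' implementation.

```python
import math
from collections import defaultdict

def entropy(belief):
    """Shannon entropy (in nats) of a {diagnosis: probability} belief."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def expected_information_gain(belief, test, simulate_result, update_belief):
    """Average reduction in diagnostic entropy from running `test`,
    marginalizing over outcomes simulated under the current belief."""
    outcome_probs = defaultdict(float)
    for dx, p_dx in belief.items():
        for outcome, p_out in simulate_result(test, dx).items():
            outcome_probs[outcome] += p_dx * p_out
    expected_posterior_entropy = sum(
        p_out * entropy(update_belief(belief, test, outcome))
        for outcome, p_out in outcome_probs.items()
    )
    return entropy(belief) - expected_posterior_entropy

def select_next_test(belief, candidate_tests, simulate_result, update_belief):
    """Greedy BED step: choose the test with the largest expected gain."""
    return max(candidate_tests,
               key=lambda t: expected_information_gain(belief, t, simulate_result, update_belief))

# Toy usage with two diagnoses and binary test outcomes. The likelihoods are
# made up; in ACTMED they would come from the LLM simulator.
belief = {"flu": 0.6, "strep": 0.4}
LIKELIHOOD = {("rapid_strep", "strep"): 0.9, ("rapid_strep", "flu"): 0.1,
              ("pcr_flu", "flu"): 0.95, ("pcr_flu", "strep"): 0.05}

def simulate_result(test, dx):
    p_pos = LIKELIHOOD[(test, dx)]
    return {"positive": p_pos, "negative": 1.0 - p_pos}

def update_belief(belief, test, outcome):
    post = {dx: p * simulate_result(test, dx)[outcome] for dx, p in belief.items()}
    z = sum(post.values())
    return {dx: p / z for dx, p in post.items()}

print(select_next_test(belief, ["rapid_strep", "pcr_flu"], simulate_result, update_belief))
```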


Argumentative Debates for Transparent Bias Detection [Technical Report]

Ayoobi, Hamed, Potyka, Nico, Rapberger, Anna, Toni, Francesca

arXiv.org Artificial Intelligence

As the use of AI in society grows, addressing emerging biases is essential to prevent systematic discrimination. Several bias detection methods have been proposed, but, with few exceptions, these tend to ignore transparency. Yet interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. We present ABIDE (Argumentative BIas detection by DEbate), a novel framework that transparently structures bias detection as a debate, guided by an underlying argument graph as understood in (formal and computational) argumentation. The arguments are about the success chances of groups in local neighbourhoods and the significance of these neighbourhoods. We evaluate ABIDE experimentally and demonstrate its performance strengths against an argumentative baseline.


Adaptive Diagnostic Reasoning Framework for Pathology with Multimodal Large Language Models

Hong, Yunqi, Kao, Johnson, Edwards, Liam, Liu, Nein-Tzu, Huang, Chung-Yen, Oliveira-Kowaleski, Alex, Hsieh, Cho-Jui, Lin, Neil Y. C.

arXiv.org Artificial Intelligence

AI tools in pathology have improved screening throughput, standardized quantification, and revealed prognostic patterns that inform treatment. However, adoption remains limited because most systems still lack the human-readable reasoning needed to audit decisions and prevent errors. We present RECAP-PATH, an interpretable framework that establishes a self-learning paradigm, shifting off-the-shelf multimodal large language models from passive pattern recognition to evidence-linked diagnostic reasoning. At its core is a two-phase learning process that autonomously derives diagnostic criteria: diversification expands pathology-style explanations, while optimization refines them for accuracy. This self-learning approach requires only small labeled sets and no white-box access or weight updates to generate cancer diagnoses. Evaluated on breast and prostate datasets, RECAP-PATH produced rationales aligned with expert assessment and delivered substantial gains in diagnostic accuracy over baselines. By uniting visual understanding with reasoning, RECAP-PATH provides clinically trustworthy AI and demonstrates a generalizable path toward evidence-linked interpretation.
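
The two-phase process described above (diversification expands candidate pathology-style explanations, optimization keeps those that are most accurate on a small labeled set) can be pictured as a generate-and-prune loop. The sketch below is only a schematic reading of that description: `generate_explanations` and `score_on_labeled_set` are hypothetical placeholders for the paper's LLM prompting and evaluation steps, not its actual interface.

```python
from typing import Callable, List

def refine_criteria(
    seed_criteria: List[str],
    generate_explanations: Callable[[str], List[str]],  # hypothetical: expand one criterion into variants
    score_on_labeled_set: Callable[[str], float],       # hypothetical: accuracy of a criterion on the labeled set
    rounds: int = 3,
    keep: int = 5,
) -> List[str]:
    """Alternate diversification (expand surviving criteria into new variants)
    with optimization (keep only the variants that score best)."""
    pool = list(seed_criteria)
    for _ in range(rounds):
        # Diversification: every surviving criterion spawns new candidate explanations.
        candidates = set(pool)
        for criterion in pool:
            candidates.update(generate_explanations(criterion))
        # Optimization: rank candidates by accuracy on the labeled set and prune.
        ranked = sorted(candidates, key=score_on_labeled_set, reverse=True)
        pool = ranked[:keep]
    return pool
```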





Supplementary Material: Identification of Partially Observed Linear Causal Models

Adams, Jeffrey, Hansen, Niels Richard

Neural Information Processing Systems

This supplement states and proves Theorem 1, Proposition 4, and Theorem 3, which characterize when a partially observed linear causal model is identifiable up to trivialities.




Statistical Undecidability in Linear, Non-Gaussian Causal Models in the Presence of Latent Confounders

Neural Information Processing Systems

When causal relationships are linear and noise terms are Gaussian, causal orientation is not identified from observational data, even if faithfulness is satisfied (Spirtes et al., 2002). Shimizu et al. (2006) showed that acyclic, linear, non-Gaussian (LiNGAM) causal models are identified from observational data, so long as no latent confounders are present. That holds even when faithfulness fails. Genin and Mayo-Wilson (2020) refine that result: not only are causal relationships identified, but causal orientation is statistically decidable.
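
To see why non-Gaussian noise makes orientation identifiable when Gaussian noise does not, here is a small self-contained illustration: in the true causal direction the regression residual is independent of the regressor, while in the anti-causal direction it is not. The dependence check below (correlation of the squared, centred values) is a crude stand-in for a proper independence test such as HSIC, and the simulated model is purely illustrative, not the procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Ground truth: x -> y, linear with non-Gaussian (uniform) noise.
x = rng.uniform(-1, 1, n)
y = 0.8 * x + rng.uniform(-1, 1, n)

def dependence(a, b):
    """Crude dependence score: |corr| plus |corr of squared, centred values|."""
    a, b = a - a.mean(), b - b.mean()
    return abs(np.corrcoef(a, b)[0, 1]) + abs(np.corrcoef(a**2, b**2)[0, 1])

def residual(target, predictor):
    """OLS residual of target regressed on predictor (both roughly zero-mean)."""
    beta = np.dot(predictor, target) / np.dot(predictor, predictor)
    return target - beta * predictor

# In the causal direction the residual is (nearly) independent of the predictor;
# in the anti-causal direction it is not, because non-Gaussianity breaks the symmetry.
print("x->y dependence:", dependence(x, residual(y, x)))  # close to zero
print("y->x dependence:", dependence(y, residual(x, y)))  # clearly larger
```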