Diagnosis
Timely Clinical Diagnosis through Active Test Selection
Estévez, Silas Ruhrberg, Astorga, Nicolás, van der Schaar, Mihaela
There is growing interest in using machine learning (ML) to support clinical diagnosis, but most approaches rely on static, fully observed datasets and fail to reflect the sequential, resource-aware reasoning clinicians use in practice. Diagnosis remains complex and error-prone, especially in high-pressure or resource-limited settings, underscoring the need for frameworks that help clinicians make timely and cost-effective decisions. We propose ACTMED (Adaptive Clinical Test selection via Model-based Experimental Design), a diagnostic framework that integrates Bayesian Experimental Design (BED) with large language models (LLMs) to better emulate real-world diagnostic reasoning. At each step, ACTMED selects the test expected to yield the greatest reduction in diagnostic uncertainty for a given patient. LLMs act as flexible simulators, generating plausible patient state distributions and supporting belief updates without requiring structured, task-specific training data. Clinicians can remain in the loop: reviewing test suggestions, interpreting intermediate outputs, and applying clinical judgment throughout. We evaluate ACTMED on real-world datasets and show it can optimize test selection to improve diagnostic accuracy, interpretability, and resource use. This represents a step toward transparent, adaptive, and clinician-aligned diagnostic systems that generalize across settings with reduced reliance on domain-specific data.
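The selection criterion described in the abstract, picking the test with the greatest expected reduction in diagnostic uncertainty, is standard expected information gain from Bayesian Experimental Design. A minimal sketch follows; the diagnoses, test names, and likelihood tables are purely hypothetical stand-ins for the distributions that ACTMED would elicit from an LLM.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete belief given as {outcome: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

# Hypothetical belief over candidate diagnoses (illustrative only).
belief = {"flu": 0.5, "covid": 0.3, "strep": 0.2}

# Hypothetical test likelihoods P(result | diagnosis) for two binary tests.
likelihoods = {
    "rapid_antigen": {"flu": {"pos": 0.10, "neg": 0.90},
                      "covid": {"pos": 0.85, "neg": 0.15},
                      "strep": {"pos": 0.05, "neg": 0.95}},
    "throat_culture": {"flu": {"pos": 0.05, "neg": 0.95},
                       "covid": {"pos": 0.05, "neg": 0.95},
                       "strep": {"pos": 0.90, "neg": 0.10}},
}

def posterior(belief, lik, result):
    """Bayes update of the belief after observing a test result."""
    unnorm = {d: belief[d] * lik[d][result] for d in belief}
    z = sum(unnorm.values())
    return {d: v / z for d, v in unnorm.items()}

def expected_info_gain(belief, lik):
    """EIG = H(belief) - E_result[H(posterior | result)]."""
    gain = entropy(belief)
    for result in ("pos", "neg"):
        p_result = sum(belief[d] * lik[d][result] for d in belief)
        if p_result > 0:
            gain -= p_result * entropy(posterior(belief, lik, result))
    return gain

# Greedy step: order the next test by expected uncertainty reduction.
gains = {t: expected_info_gain(belief, likelihoods[t]) for t in likelihoods}
print({t: round(g, 3) for t, g in gains.items()})
```

In a sequential loop, the belief would be updated with the observed result and the remaining tests re-ranked, with a clinician free to override any suggestion.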
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Europe > Spain (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.96)
- (2 more...)
Argumentative Debates for Transparent Bias Detection [Technical Report]
Ayoobi, Hamed, Potyka, Nico, Rapberger, Anna, Toni, Francesca
As the use of AI in society grows, addressing emerging biases is essential to prevent systematic discrimination. Several bias detection methods have been proposed, but, with few exceptions, these tend to ignore transparency, even though interpretability and explainability are core requirements for algorithmic fairness, even more so than for other algorithmic solutions, given the human-oriented nature of fairness. We present ABIDE (Argumentative BIas detection by DEbate), a novel framework that structures bias detection transparently as debate, guided by an underlying argument graph as understood in (formal and computational) argumentation. The arguments are about the success chances of groups in local neighbourhoods and the significance of these neighbourhoods. We evaluate ABIDE experimentally and demonstrate its strengths in performance against an argumentative baseline.
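The debate structure the abstract describes can be grounded in standard abstract argumentation semantics. Below is a minimal sketch of the grounded extension of an attack graph; the argument names and the tiny debate about a bias claim are illustrative assumptions, not ABIDE's actual arguments.

```python
def grounded_extension(args, attacks):
    """Grounded semantics via fixed-point iteration: accept an argument
    once every one of its attackers is attacked by an accepted argument."""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Hypothetical debate (names are illustrative): a bias claim is attacked by
# a counter-argument citing equal success rates, which is itself attacked
# because the neighbourhood it relies on is statistically insignificant.
args = {"bias_claim", "equal_success_rates", "neighbourhood_insignificant"}
attacks = {("equal_success_rates", "bias_claim"),
           ("neighbourhood_insignificant", "equal_success_rates")}
print(sorted(grounded_extension(args, attacks)))
# → ['bias_claim', 'neighbourhood_insignificant']
```

The bias claim survives because its only attacker is defeated, which is exactly the kind of traceable, step-by-step justification a debate-structured detector can surface to a human auditor.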
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Netherlands > Groningen (0.04)
- Europe > Germany > North Rhine-Westphalia > Arnsberg Region > Dortmund (0.04)
Adaptive Diagnostic Reasoning Framework for Pathology with Multimodal Large Language Models
Hong, Yunqi, Kao, Johnson, Edwards, Liam, Liu, Nein-Tzu, Huang, Chung-Yen, Oliveira-Kowaleski, Alex, Hsieh, Cho-Jui, Lin, Neil Y. C.
AI tools in pathology have improved screening throughput, standardized quantification, and revealed prognostic patterns that inform treatment. However, adoption remains limited because most systems still lack the human-readable reasoning needed to audit decisions and prevent errors. We present RECAP-PATH, an interpretable framework that establishes a self-learning paradigm, shifting off-the-shelf multimodal large language models from passive pattern recognition to evidence-linked diagnostic reasoning. At its core is a two-phase learning process that autonomously derives diagnostic criteria: diversification expands pathology-style explanations, while optimization refines them for accuracy. This self-learning approach requires only small labeled sets and no white-box access or weight updates to generate cancer diagnoses. Evaluated on breast and prostate datasets, RECAP-PATH produced rationales aligned with expert assessment and delivered substantial gains in diagnostic accuracy over baselines. By uniting visual understanding with reasoning, RECAP-PATH provides clinically trustworthy AI and demonstrates a generalizable path toward evidence-linked interpretation.
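The two-phase diversify-then-optimize loop described above can be caricatured in a few lines. In RECAP-PATH the candidate diagnostic criteria would come from a multimodal LLM's pathology-style explanations of slide images; in this sketch, random subsets of a hypothetical criterion pool and a toy labeled set stand in for both, so only the control flow is faithful.

```python
import random

random.seed(0)

# Hypothetical pool of pathology-style criteria (illustrative names only).
CRITERIA = ["nuclear_pleomorphism", "mitotic_figures", "gland_formation",
            "necrosis", "stromal_invasion"]
# Toy labeled cases: (observed features, label 1 = malignant).
CASES = [({"nuclear_pleomorphism", "mitotic_figures"}, 1),
         ({"gland_formation"}, 0),
         ({"mitotic_figures", "necrosis"}, 1),
         ({"stromal_invasion"}, 1),
         ({"gland_formation", "necrosis"}, 0)]

def accuracy(criteria, cases):
    """Score a criterion set: call a case malignant if any criterion matches."""
    hits = sum((len(criteria & feats) >= 1) == bool(label)
               for feats, label in cases)
    return hits / len(cases)

def diversify(n=20):
    """Phase 1: generate diverse candidate criterion sets (random stand-in
    for LLM-generated explanation variants)."""
    return [set(random.sample(CRITERIA, random.randint(1, 3)))
            for _ in range(n)]

def optimize(candidates, cases):
    """Phase 2: refine by keeping the candidate most accurate on the
    small labeled set."""
    return max(candidates, key=lambda c: accuracy(c, cases))

best = optimize(diversify(), CASES)
print(sorted(best), accuracy(best, CASES))
```

The point of the structure is that only black-box generation and a small labeled set are needed: no white-box access or weight updates, matching the paradigm the abstract claims.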
- North America > United States > California > Los Angeles County > Los Angeles (0.29)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (2 more...)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > Michigan (0.04)
- North America > Canada > British Columbia (0.04)
- (5 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.52)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.42)
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Virginia > Arlington County > Arlington (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.34)
- South America > Chile (0.05)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.66)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.46)
Supplementary Material: Identification of Partially Observed Linear Causal Models
Adams, Jeffrey, Hansen, Niels Richard
Let us present the complete theorem first, and then give its proof. We are now ready to present Theorem 1. Theorem 1. But since F induces a different DAG, F is not identified up to trivialities. Proposition 4. For any graph G there exists F. There are two cases to consider. The backward direction is obvious. This follows from definitions and acyclicity. 1.4.5 Proof of Theorem 3. Theorem 3. Then F is identifiable up to trivialities.
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- Information Technology > Artificial Intelligence > Machine Learning (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.40)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- Health & Medicine (0.46)
- Government (0.46)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.42)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.51)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.51)
Statistical Undecidability in Linear, Non-Gaussian Causal Models in the Presence of Latent Confounders
When the errors are Gaussian, causal orientation is not identified from observational data, even if faithfulness is satisfied (Spirtes et al., 2002). Shimizu et al. (2006) showed that acyclic, linear, non-Gaussian (LiNGAM) causal models are identified from observational data, so long as no latent confounders are present. That holds even when faithfulness fails. Genin and Mayo-Wilson (2020) refine that result: not only are causal relationships identified, but causal orientation is statistically decidable.
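The LiNGAM identifiability result cited above has a simple empirical signature that can be demonstrated in a few lines (a stdlib-only sketch of the intuition, not the paper's method): with non-Gaussian noise, least-squares residuals are independent of the regressor only in the true causal direction. Squared-value correlation serves here as a crude dependence proxy.

```python
import random

random.seed(1)
N = 20000

# Simulate a linear, non-Gaussian model x -> y with uniform noise.
x = [random.uniform(-1, 1) for _ in range(N)]
y = [0.8 * xi + random.uniform(-1, 1) for xi in x]

def mean(v):
    return sum(v) / len(v)

def corr(a, b):
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

def ols_residuals(cause, effect):
    """Residuals of effect regressed on cause by ordinary least squares."""
    mc, me = mean(cause), mean(effect)
    slope = (sum((c - mc) * (e - me) for c, e in zip(cause, effect))
             / sum((c - mc) ** 2 for c in cause))
    return [e - slope * c for c, e in zip(cause, effect)]

def dep(cause, effect):
    """Crude dependence proxy: |corr| of squared regressor vs. squared residual.
    Near zero when the residual is truly independent of the regressor."""
    r = ols_residuals(cause, effect)
    return abs(corr([c * c for c in cause], [ri * ri for ri in r]))

forward, backward = dep(x, y), dep(y, x)
print(f"x->y dependence: {forward:.3f}, y->x dependence: {backward:.3f}")
```

Regressing in the true direction leaves the residual equal to the independent noise term, so the proxy is near zero; in the reverse direction the residual remains a mixture of both non-Gaussian sources and the dependence is clearly visible. With Gaussian noise both directions would look identical, which is exactly the asymmetry LiNGAM exploits.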