Human-aligned Deep Learning: Explainability, Causality, and Biological Inspiration
arXiv.org Artificial Intelligence
This work aligns deep learning (DL) with human reasoning capabilities and needs to enable more efficient, interpretable, and robust image classification. We approach this from three perspectives: explainability, causality, and biological vision. Introductory and background chapters open this work before it moves into the operative chapters. First, we assess visualization techniques for neural networks applied to medical images and validate an explainable-by-design method for breast mass classification. A comprehensive review at the intersection of XAI and causality follows, where we introduce a general scaffold to organize past and future research, laying the groundwork for our second perspective. In the causality direction, we propose novel modules that exploit feature co-occurrence in medical images, leading to more effective and explainable predictions. We further introduce CROCODILE, a general framework that integrates causal concepts, contrastive learning, feature disentanglement, and prior knowledge to enhance generalization. Lastly, we explore biological vision, examining how humans recognize objects, and propose CoCoReco, a connectivity-inspired network with context-aware attention mechanisms. Overall, our key findings include: (i) simple activation maximization lacks insight for medical imaging DL models; (ii) prototypical-part learning is effective and radiologically aligned; (iii) XAI and causal ML are deeply connected; (iv) weak causal signals can be leveraged without a priori information to improve performance and interpretability; (v) our framework generalizes across medical domains and out-of-distribution data; (vi) incorporating biological circuit motifs improves human-aligned recognition. This work contributes toward human-aligned DL and highlights pathways to bridge the gap between research and clinical adoption, with implications for improved trust, diagnostic accuracy, and safe deployment.
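To illustrate the first of the assessed visualization techniques, activation maximization synthesizes, by gradient ascent on the input itself, a pattern that maximally excites a chosen unit. The sketch below is a minimal illustration, not the authors' implementation: a real study would target a unit inside a trained CNN, whereas the single tanh "neuron" here (weights `w`, dimension `DIM`) is an assumption made only to keep the example self-contained.

```python
import math
import random

# Minimal sketch of activation maximization (illustrative assumption: a
# single tanh unit stands in for a unit inside a trained CNN).
random.seed(0)
DIM = 64
w = [random.gauss(0.0, 1.0) for _ in range(DIM)]   # target unit's weights
x = [random.gauss(0.0, 0.01) for _ in range(DIM)]  # start from faint noise

def activation(x):
    # Unit response to input x: tanh of the dot product with w.
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

start = activation(x)
for _ in range(200):
    pre = sum(wi * xi for wi, xi in zip(w, x))
    g = 1.0 - math.tanh(pre) ** 2                     # d tanh / d pre
    x = [xi + 0.1 * g * wi for xi, wi in zip(x, w)]   # ascend the input gradient
    norm = math.sqrt(sum(xi * xi for xi in x)) or 1.0
    x = [8.0 * xi / norm for xi in x]                 # fixed-norm prior on the input

end = activation(x)
# `end` far exceeds `start`: the synthesized input strongly excites the unit.
```

The resulting input reveals what the unit responds to; the work's first finding is that, for medical imaging models, such synthesized patterns carry little diagnostically meaningful insight.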
Apr-21-2025
- Country:
  - Asia > Middle East (0.27)
  - Europe > Germany (0.45)
  - North America > United States (0.45)
- Genre:
  - Overview (1.00)
  - Research Report
    - New Finding (1.00)
    - Promising Solution (0.67)
  - Summary/Review (1.00)
- Industry:
  - Health & Medicine
    - Diagnostic Medicine > Imaging (1.00)
    - Therapeutic Area
      - Neurology (1.00)
      - Oncology (1.00)
      - Pulmonary/Respiratory Diseases (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Cognitive Science > Problem Solving (0.87)
      - Machine Learning
        - Learning Graphical Models > Directed Networks
          - Bayesian Learning (0.45)
        - Neural Networks > Deep Learning (1.00)
        - Performance Analysis > Accuracy (1.00)
        - Statistical Learning (1.00)
      - Natural Language > Explanation & Argumentation (0.93)
      - Representation & Reasoning
        - Diagnosis (0.87)
        - Expert Systems (1.00)
      - Vision > Image Understanding (0.87)
    - Sensing and Signal Processing > Image Processing (1.00)