In Defence of Post-hoc Explainability
arXiv.org Artificial Intelligence
The widespread adoption of machine learning in scientific research has created a fundamental tension between model opacity and scientific understanding. Whilst some advocate for intrinsically interpretable models, we introduce Computational Interpretabilism (CI) as a philosophical framework for post-hoc interpretability in scientific AI. Drawing parallels with human expertise, where post-hoc rationalisation coexists with reliable performance, CI establishes that scientific knowledge emerges through structured model interpretation when properly bounded by empirical validation. Through mediated understanding and bounded factivity, we demonstrate how post-hoc methods achieve epistemically justified insights without requiring complete mechanical transparency, resolving tensions between model complexity and scientific comprehension.
Dec-23-2024