Philosophy-informed Machine Learning
arXiv.org Artificial Intelligence
A deep dive into the open literature shows that there are three fundamental limitations to current ML approaches: black-box brittleness (which renders models uninterpretable and unreliable under distribution shift [2]), causal blindness (which conflates correlation with causation [3]), and alignment failures (which produce systems optimizing objectives misaligned with human values [4]). These deficiencies stem from a profound philosophical poverty in how ML conceptualizes knowledge, reasoning, and values. The first limitation, black-box brittleness, manifests when trained models fail on seemingly trivial variations of their training distribution. For example, a vision model that accurately identifies stop signs under normal conditions may misclassify them entirely when small adversarial perturbations are applied [5]. The same brittleness extends beyond adversarial examples to everyday distribution shifts: natural language processing models, for instance, degrade when processing text from cultural contexts underrepresented in their training data [6].
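The adversarial-perturbation phenomenon mentioned above can be illustrated with a minimal sketch. The toy logistic model and the `fgsm_perturb` helper below are illustrative assumptions, not anything from the paper; the sketch uses the Fast Gradient Sign Method (FGSM), the standard attack in which the input is nudged along the sign of the loss gradient, flipping the prediction with only a small perturbation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, y, eps):
    """FGSM on a toy logistic classifier sign(w.x).
    The gradient of the logistic loss -log sigmoid(y * w.x) with
    respect to the input x is -y * sigmoid(-y * w.x) * w; the attack
    steps eps along the sign of that gradient."""
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])   # fixed "trained" weights
x = np.array([0.3, 0.1, 0.2])    # clean input: w.x = 0.2 > 0, class +1
y = 1.0                          # true label

x_adv = fgsm_perturb(x, w, y, eps=0.2)
print(np.dot(w, x) > 0)      # True: clean input classified correctly
print(np.dot(w, x_adv) > 0)  # False: a 0.2-sized nudge flips the prediction
```

Even this linear toy shows the core issue: the perturbation is small per coordinate, yet the model's output changes discontinuously, which is exactly the brittleness deep networks exhibit at scale.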
Sep-26-2025