Inductive Models for Artificial Intelligence Systems are Insufficient without Good Explanations
–arXiv.org Artificial Intelligence
Instead of providing an explanation of a phenomenon, models trained this way present us with yet another phenomenon that needs an explanation. It highlights the 'problem of induction': the philosophical issue that past observations may not necessarily predict future events, a challenge that ML models face when encountering new, unseen data. The paper argues for the importance of not just making predictions but also providing good explanations, a feature that current models often fail to deliver.

…networks (ANNs), which are effective at approximating complex functions but often lack transparency and explanatory power [Wiegreffe and Pinter, 2019; Jain and Wallace, 2019]. Thus, despite the recent surge in the field of 'explainable AI' [Doshi-Velez and Kim, 2017], which attempts to provide some insight into the generalizations made by trained models, it may be the case that the underlying problem of induction and a lack of good explanations will remain so long as we use machine induction as the primary path in AI.
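The out-of-distribution failure the summary describes can be made concrete with a toy sketch (our illustration of the general point, not code from the paper): a model fit by induction on a narrow range of observations looks accurate in-sample yet fails badly on unseen inputs.

```python
# Toy illustration (not from the paper): inductive fitting on past
# observations can look successful yet fail on unseen data.
# We fit a straight line to samples of f(x) = x**2 taken only on [0, 1],
# where the curve happens to look almost linear, then query far outside.

def f(x):
    """The 'true' phenomenon, unknown to the model."""
    return x * x

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# "Past observations": the model only ever sees x in [0, 1].
train_xs = [i / 10 for i in range(11)]
a, b = fit_line(train_xs, [f(x) for x in train_xs])

def predict(x):
    return a * x + b

# In-sample the fit looks fine; on an unseen regime it is wildly wrong.
in_sample_err = max(abs(predict(x) - f(x)) for x in train_xs)  # small
unseen_err = abs(predict(10.0) - f(10.0))                      # large
print(in_sample_err, unseen_err)
```

No amount of in-sample accuracy licenses the extrapolation; only an explanation of why the data behave as they do (here, knowing the underlying law is quadratic) would, which is the distinction the paper draws between good explanations and bare induction.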
Jan-17-2024
- Country:
  - Asia > Russia (0.04)
  - Europe:
    - Ireland (0.04)
    - Russia (0.04)
    - United Kingdom (0.04)
  - North America > United States:
    - Virginia (0.04)
- Genre:
  - Research Report (1.00)
- Industry:
  - Health & Medicine (0.68)
- Technology: