Towards Gradient-based Time-Series Explanations through a SpatioTemporal Attention Network
arXiv.org Artificial Intelligence
However, it is not desirable to apply AI fully autonomously, as wrong outcomes of AI models in high-stakes domains could have serious impacts on people. Regardless of the performance of an AI model, end-users want to understand the evidence behind its outcomes [35]. A growing body of research investigates how to generate explanations of an AI model and augment users' decision-making tasks [2, 18, 25]. Researchers have explored various techniques to make AI interpretable and explainable [15]. These explainable AI techniques can be broadly categorized into inherently interpretable models (e.g.
May-17-2024