SpecXAI -- Spectral interpretability of Deep Learning Models
Druc, Stefan, Wooldridge, Peter, Krishnamurthy, Adarsh, Sarkar, Soumik, Balu, Aditya
arXiv.org Artificial Intelligence
Deep learning has become a ubiquitous, versatile, and powerful technique with a wide range of applications across many fields, such as image and speech recognition, natural language processing, and self-driving cars. Its most popular application is in computer vision, where deep learning models are used for tasks such as image classification, object detection, and segmentation. While effective and powerful, deep learning models suffer from a lack of explainability [1-4]. Unlike traditional machine learning models, which can often be understood through simple mathematical equations, deep learning models are highly complex and difficult to interpret. This makes it hard to understand how a model arrived at a particular decision, which is a problem in areas such as healthcare [5] or finance [6, 7], where transparency is important.
Feb-20-2023