SpecXAI -- Spectral interpretability of Deep Learning Models

Stefan Druc, Peter Wooldridge, Adarsh Krishnamurthy, Soumik Sarkar, Aditya Balu

arXiv.org Artificial Intelligence 

Deep learning has become a ubiquitous, versatile, and powerful technique with a wide range of applications across many fields, such as image and speech recognition, natural language processing, and self-driving cars. Its most popular application is in computer vision, where deep learning models are used for tasks such as image classification, object detection, and segmentation. While effective and powerful, deep learning models suffer from a persistent challenge: explainability [1-4]. Unlike traditional machine learning models, which can be understood through simple mathematical equations, deep learning models are highly complex and difficult to interpret. This makes it hard to understand how a model arrived at a particular decision, which is a problem in areas such as healthcare [5] or finance [6, 7], where transparency is important.
