The explainability problem - can new approaches pry open the AI black box?
The so-called "black-box" aspect of AI, usually referred to as the explainability problem, or XAI for short, emerged gradually over the past few years, but with the rapid development of AI it is now considered a significant problem. How can you trust a model if you cannot understand how it reaches its conclusions? Whether for commercial benefit, ethical concerns, or regulatory considerations, XAI is essential if users are to understand, appropriately trust, and effectively manage AI results. In researching this topic, I was surprised to find almost 400 papers on the subject.
Sep-23-2020, 00:30:15 GMT