eXplainable Artificial Intelligence (XAI) in aging clock models

Kalyakulina, Alena, Yusipov, Igor, Moskalev, Alexey, Franceschi, Claudio, Ivanchenko, Mikhail

arXiv.org Artificial Intelligence 

Machine learning (ML), and deep learning (DL) in particular, is currently one of the most common data analysis approaches in applications. Deep models handle large amounts of input data by training many layers, but in most cases their functioning is not transparent; in this regard, they are often called black boxes [Saleem et al., 2022]. The decision-making process in such deep architectures is difficult to explain, raising concerns about the trustworthiness of such models and the security of their deployment. The problem of explainability of artificial intelligence (AI) models has received much attention [Baehrens et al., 2010, Lipton, 2018, Samek et al., 2017, Simonyan et al., 2014] and has made eXplainable Artificial Intelligence (XAI) an important area of AI [Nauta et al., 2023]. The major goals of XAI are to develop approaches capable of uncovering the grounds behind model decision-making and, more profoundly, to develop interpretable and logically explainable models. XAI explanations must be understandable and reliable, while the explained models must retain predictive accuracy [Saleem et al., 2022].
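One common way to "uncover the grounds behind model decision-making" without opening the black box is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The sketch below is purely illustrative (it is not from the paper): a synthetic dataset and an ordinary least-squares model stand in for an aging-clock regressor, and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares model; here it plays the role of the
# "black box" whose predictions we want to explain.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
predict = lambda M: np.c_[M, np.ones(len(M))] @ coef
baseline_mse = np.mean((predict(X) - y) ** 2)

# Permutation importance: a model-agnostic, post-hoc explanation.
# Shuffling an important feature breaks its relationship with the target,
# so the error increase reflects how much the model relies on it.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((predict(Xp) - y) ** 2) - baseline_mse)

print(importance)  # feature 0 dominates; feature 2 is near zero
```

Explanations of this kind attribute a model's behavior to its inputs without requiring the model itself to be interpretable, which is why such post-hoc methods feature prominently in the XAI literature discussed above.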
