SurvLIME: A method for explaining machine learning survival models

Kovalev, Maxim S., Utkin, Lev V., Kasimov, Ernest M.

arXiv.org Machine Learning 

Many complex problems in various applications are currently solved by deep machine learning models, in particular deep neural networks. A demonstrative example is disease diagnosis on the basis of medical images or other medical information. At the same time, deep learning models often work as black-box models, so that the details of their functioning are unknown and it is difficult to explain how a certain result or decision is reached. As a consequence, machine learning models face difficulties in being incorporated into many important applications, for example medicine, where doctors need an explanation of a stated diagnosis in order to choose a corresponding treatment. The lack of explanation in many machine learning models has motivated the development of methods that interpret or explain the predictions of deep machine learning algorithms and make the decision-making process, or the key factors involved in a decision, understandable [4, 18, 35, 36]. Methods explaining black-box machine learning models can be divided into two main groups: local methods, which derive an explanation locally around a test example, and global methods, which try to explain the overall behavior of the model. A key component of an explanation is the contribution of individual input features: a prediction is considered explained when every feature is assigned a number quantifying its impact on that prediction.
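The idea of a local explanation assigning an importance number to each feature can be illustrated with a minimal LIME-style sketch (this is not the SurvLIME method itself; the black-box function, perturbation scale, and kernel width below are all illustrative assumptions): perturb the instance of interest, query the black box, and fit a proximity-weighted linear surrogate whose coefficients serve as the local feature importances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

x0 = np.array([1.0, 2.0])  # test example to be explained

# 1. Generate perturbations locally around x0.
X = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(X)

# 2. Weight perturbations by their proximity to x0 (Gaussian kernel).
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02)

# 3. Fit a weighted linear surrogate; its coefficients are the
#    numbers quantifying each feature's local impact.
A = np.hstack([np.ones((len(X), 1)), X - x0])  # intercept + centred features
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
importance = coef[1:]
print(importance)
```

Because the surrogate is fitted only in a small neighborhood, the recovered importances approximate the local gradient of the black box at x0, here roughly (cos(1.0), 2.0).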
