How to Explain Your Machine Learning Predictions with SHAP Values
As the author states on the GitHub page: "SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions." Shapley values, then, come from classic game theory. Game theory covers many game types -- cooperative vs. non-cooperative, symmetric vs. asymmetric, zero-sum vs. non-zero-sum, and so on -- but Shapley values specifically arise from cooperative (coalition) games.
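To make the coalition-game foundation concrete, here is a minimal sketch that computes exact Shapley values for a tiny cooperative game using the classic formula. This is a toy illustration, not SHAP's optimized estimators; the value function `v` and player names are hypothetical examples invented for this sketch.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value for each player: the weighted average of that
    player's marginal contribution v(S + {i}) - v(S) over all coalitions S."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Classic Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Hypothetical toy game: each of two players contributes 1.0 alone,
# plus a synergy bonus of 1.0 when both cooperate.
def v(coalition):
    payoff = 1.0 * len(coalition)
    if {"a", "b"} <= coalition:
        payoff += 1.0
    return payoff

print(shapley_values(["a", "b"], v))  # symmetric players split the synergy evenly
```

Note that the values sum to the grand coalition's payoff v({a, b}), which is exactly the "optimal credit allocation" property SHAP exploits: a model's prediction is split fairly among the input features.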
Nov-20-2020, 01:45:33 GMT