Explaining How a Neural Network Play the Go Game and Let People Learn

Huilin Zhou, Huijie Tang, Mingjie Li, Hao Zhang, Zhenyu Liu, Quanshi Zhang

arXiv.org Artificial Intelligence 

AI models have surpassed human players in the game of Go [Fang et al., 2018, Granter et al., 2017, Intelligence, 2016], and it is widely believed that such models encode knowledge about the game beyond that of human players. Explaining the knowledge encoded by an AI model and using it to teach human players is therefore a promising yet challenging problem in explainable AI. To this end, mathematical support is required to ensure that human players learn accurate and verifiable knowledge, rather than relying on specious intuitive analysis. Thus, in this paper, we extract interaction primitives between stones encoded by the value network for the Go game, so as to enable people to learn from the value network. Experiments show the effectiveness of our method.
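For concreteness, the sketch below illustrates one common way an interaction primitive between stones could be computed: the Harsanyi interaction (dividend) of a coalition of stones, defined as I(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), where v(T) is the value network's output when only the stones in T are kept on the board. This is a minimal, illustrative sketch under that assumption; the function names (`harsanyi_interaction`, `value_fn`, `toy_value`) are hypothetical and do not reproduce the paper's actual implementation.

```python
import itertools
from typing import Callable, FrozenSet, Tuple


def harsanyi_interaction(
    value_fn: Callable[[FrozenSet[int]], float],
    coalition: Tuple[int, ...],
) -> float:
    """Harsanyi interaction of a coalition S of stones:
        I(S) = sum_{T subseteq S} (-1)^(|S| - |T|) * v(T),
    where value_fn(T) scores a board on which only the stones in T are kept
    (all other stones masked). Requires 2^|S| evaluations of value_fn.
    """
    s = len(coalition)
    total = 0.0
    for r in range(s + 1):
        for subset in itertools.combinations(coalition, r):
            total += (-1) ** (s - r) * value_fn(frozenset(subset))
    return total


# Toy value function: only rewards stones 0 and 1 acting together,
# so the pair (0, 1) carries a nonzero interaction and (0, 2) does not.
def toy_value(kept: FrozenSet[int]) -> float:
    return 1.0 if {0, 1} <= kept else 0.0


print(harsanyi_interaction(toy_value, (0, 1)))  # 1.0
print(harsanyi_interaction(toy_value, (0, 2)))  # 0.0
```

In practice, `value_fn` would wrap the trained value network and evaluate it on masked board states; the toy function above merely makes the inclusion-exclusion sum easy to verify by hand.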
