Masanari Kimura 1, Masayuki Tanaka 1,2
1 National Institute of Advanced Industrial Science and Technology
2 Tokyo Institute of Technology
email@example.com firstname.lastname@example.org

Abstract

Deep neural networks (DNNs) are known as black-box models: it is difficult to interpret the internal state of the model. Improving the interpretability of DNNs is an active research topic. However, at present, the definition of interpretability for DNNs is vague, and the question of what constitutes a highly explanatory model is still controversial. To address this issue, we provide a definition of the human predictability of a model as one aspect of the interpretability of DNNs. The human predictability proposed in this paper is defined by how easily a human can predict the change in the model's inference when the model is perturbed. In addition, we introduce one example of a highly human-predictable DNN. We discuss how our definition can support research on the interpretability of DNNs across various types of applications.

Introduction

In recent years, Deep Neural Networks (DNNs) have achieved great success in a number of tasks (Deng et al. 2009; Liu et al. 2017).
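The notion of human predictability sketched in the abstract, namely how easily one can predict the change in a model's inference under a perturbation of its parameters, can be illustrated with a toy example. This is not the paper's method, just a minimal sketch using a made-up one-layer softmax model and a small random weight perturbation:

```python
import numpy as np

# Illustrative only: probe how a model's inference changes when its
# parameters are perturbed. The model and perturbation scale are
# arbitrary choices, not taken from the paper.
rng = np.random.default_rng(0)

def model(x, W):
    """A toy one-layer 'network': softmax over a linear map."""
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=4)                 # a fixed input
W = rng.normal(size=(4, 3))            # original parameters
dW = 0.01 * rng.normal(size=W.shape)   # small perturbation

before = model(x, W)
after = model(x, W + dW)

# If small parameter perturbations cause small, regular changes in the
# output, a human can more easily anticipate the model's behavior.
change = np.abs(after - before).max()
print(change)
```

Intuitively, a model is more human-predictable when the mapping from such perturbations to output changes is easy for a person to anticipate.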