On the Complexity of Global Necessary Reasons to Explain Classification

Calautti, Marco, Malizia, Enrico, Molinaro, Cristian

arXiv.org Artificial Intelligence 

Among the many interesting problems in XAI, explaining classifiers' decisions has attracted significant attention [Bae+10; MI22; Mar24]. Work in this area has proposed notions to explain a classifier's behavior on a specific feature vector (instance), providing so-called local explanations, as well as notions to explain a classifier's overall behavior regardless of any particular instance, providing so-called global explanations. For both notions, a key issue is analyzing the computational complexity of the problems at hand, since this is crucial to understanding how to approach the development of algorithmic solutions and what their inherent limits are. In the field of XAI, an extensive body of work has addressed complexity issues [BAK24; OPS23; Are+23; CM23; CA23; CCM23; Hua+22; Aud+22; CM22; Bar+20a; Mar+20]. This paper falls within this ongoing research stream. Specifically, we deal with global explanations and focus on so-called global necessary reasons, defined as conditions that instances must satisfy in order to be classified with a class of interest. This notion was considered by Ignatiev, Narodytska, and Marques-Silva [INM19], who show an interesting relationship with another kind of global explanation.
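To make the notion concrete, here is a minimal illustrative sketch (not taken from the paper; the toy classifier and candidate condition are hypothetical): a condition is a global necessary reason for a class if every instance the classifier assigns to that class satisfies the condition.

```python
from itertools import product

# Hypothetical toy classifier over three Boolean features:
# predicts class 1 iff f1 is set and at least one of f2, f3 is set.
def classifier(f1: int, f2: int, f3: int) -> int:
    return 1 if f1 == 1 and (f2 == 1 or f3 == 1) else 0

# Candidate global condition: "f1 = 1".
def condition(f1: int, f2: int, f3: int) -> bool:
    return f1 == 1

# A condition is a global necessary reason for class 1 if every
# instance classified with class 1 satisfies the condition.
def is_global_necessary_reason(clf, cond, n_features: int = 3) -> bool:
    return all(
        cond(*x)
        for x in product([0, 1], repeat=n_features)
        if clf(*x) == 1
    )

print(is_global_necessary_reason(classifier, condition))  # True
```

The brute-force check above enumerates all instances and is only meant to convey the definition; the paper's concern is precisely the computational complexity of reasoning about such conditions without exhaustive enumeration.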