Why Not? Explaining Missing Entailments with Evee (Technical Report)

Christian Alrabbaa, Stefan Borgwardt, Tom Friese, Patrick Koopmann, Mikhail Kotlov

arXiv.org Artificial Intelligence 

We present a Protégé plugin for explaining missing entailments from OWL ontologies. The importance of explaining description logic (DL) reasoning to end-users has long been understood, and has been studied in many forms over the past decades. Indeed, explainability is one of the main advantages of logic-based knowledge representations over sub-symbolic methods. The first approaches to explaining why a consequence follows from a DL ontology were based on step-by-step proofs [8, 18], but research soon focused on justifications [7, 11, 20], which are easier to compute yet still very useful for pointing out the axioms responsible for an entailment. Consequently, the ontology editor Protégé supports black-box methods for computing justifications for arbitrary OWL DL ontologies [12]. More recently, a series of papers investigated different methods of computing good proofs for entailments in DLs ranging from EL to ALCOI [13, 1, 2, 3], and the Protégé plugins proof-explanation [13] and Evee [4], as well as the web-based application Evonne [19], were developed to make these algorithms available to ontology engineers. While reasoning can sometimes reveal unexpected entailments that need explaining, very often the problem is not what is entailed, but what is not entailed. To explain such missing entailments, and to offer suggestions on how to repair them, both counterexamples and abduction have been suggested in the literature.
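As a small hypothetical illustration (not taken from the paper), consider the ontology

    O = { A ⊑ B,  B ⊓ C ⊑ D }.

The subsumption A ⊑ D is a missing entailment: it does not follow from O, as witnessed by a counterexample interpretation with a single domain element that belongs to A and B but neither to C nor to D, satisfying both axioms of O while violating A ⊑ D. An abductive explanation for the missing entailment is the hypothesis A ⊑ C, since adding it to O yields A ⊑ B ⊓ C ⊑ D; presenting such hypotheses to the ontology engineer is one way of suggesting a repair.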
