An Algebraic Notion of Conditional Independence, and Its Application to Knowledge Representation (full version)

Heyninck, Jesse

arXiv.org Artificial Intelligence 

Over the last decades, conditional independence has been shown to be a crucial concept supporting adequate modelling and efficient reasoning in probability theory (Pearl, Geiger, and Verma, 1989). It is the fundamental concept underlying network-based probabilistic reasoning, which has arguably been one of the most important factors in the rise of contemporary artificial intelligence. Even though many reasoning tasks over probabilistic information have a high worst-case complexity due to their semantic nature, network-based models allow efficient computation of many concrete instances of these tasks thanks to local reasoning techniques. Conditional independence has therefore also been investigated for several approaches in knowledge representation, such as propositional logic (Darwiche, 1997; Lang, Liberatore, and Marquis, 2002), belief revision (Kern-Isberner, Heyninck, and Beierle, 2022; Lynn, Delgrande, and Peppas, 2022) and conditional logics (Heyninck et al., 2023). For many other central formalisms in KR, such a study has not yet been undertaken. Due to the wide variety of formalisms studied in knowledge representation, it is often beneficial yet challenging to study a concept in a language-independent manner. Indeed, such language-independent studies avoid having to define and investigate the same concept anew for each formalism. In recent years, algebraic approximation fixpoint theory (AFT) (Denecker, Marek, and Truszczyński, 2003) has emerged as a promising framework for such language-independent investigations; it conceives of KR formalisms as operators over a lattice (such as the immediate consequence operator from logic programming).
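To make the operator view concrete, the following is a minimal sketch (not taken from the paper) of the immediate consequence operator of a definite logic program, treated as a monotone operator on the lattice of interpretations (sets of atoms ordered by subset inclusion); the example program and atom names are invented for illustration.

```python
# Illustrative sketch: the immediate consequence operator T_P of a
# definite logic program, viewed as a monotone operator on the lattice
# of interpretations (sets of atoms ordered by subset inclusion).
# The program below is a made-up example.

# Each rule maps a head atom to a list of body atoms.
program = [
    ("a", []),          # a.
    ("b", ["a"]),       # b :- a.
    ("c", ["a", "b"]),  # c :- a, b.
    ("d", ["e"]),       # d :- e.  (e is never derivable)
]

def t_p(interpretation):
    """One application of the immediate consequence operator:
    derive every head whose body atoms all hold in the interpretation."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_fixpoint(op):
    """Iterate the monotone operator from the bottom element (the empty
    set) until a fixpoint is reached; by Knaster-Tarski this yields the
    least fixpoint, i.e. the least model of the program."""
    current = set()
    while True:
        nxt = op(current)
        if nxt == current:
            return current
        current = nxt

print(sorted(least_fixpoint(t_p)))  # -> ['a', 'b', 'c']
```

AFT generalises exactly this picture: semantics of a formalism are characterised as (approximations of) fixpoints of such lattice operators, independently of the formalism's concrete syntax.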