Renkens, Joris (Katholieke Universiteit Leuven) | Kimmig, Angelika (Katholieke Universiteit Leuven) | Broeck, Guy Van den (Katholieke Universiteit Leuven) | Raedt, Luc De (Katholieke Universiteit Leuven)

Probabilistic inference can be realized using weighted model counting. Despite a lot of progress, computing weighted model counts exactly is still infeasible for many problems of interest, and one typically has to resort to approximation methods. We contribute a new bounded approximation method for weighted model counting based on probabilistic logic programming principles. Our bounded approximation algorithm is an anytime algorithm that provides lower and upper bounds on the weighted model count. An empirical evaluation on probabilistic logic programs shows that our approach is effective in many cases that are currently beyond the reach of exact methods. (To appear at AAAI-14.)
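To illustrate the anytime lower/upper-bound idea, here is a minimal brute-force sketch (an assumed toy formulation, not the authors' algorithm): for a CNF over independent Boolean variables whose literal weights are a probability and its complement, the total weight over all assignments is 1, so the accumulated weight of satisfying assignments found so far is a lower bound on the weighted model count, and adding the weight of the still-unexplored assignments gives an upper bound. Both bounds tighten monotonically as enumeration proceeds.

```python
from itertools import product

def anytime_wmc(clauses, var_probs):
    """Yield successive (lower, upper) bounds on the weighted model count.

    clauses: list of clauses; each clause is a list of (sign, var) literals,
             where sign=True means the positive literal.
    var_probs: dict mapping each variable to its weight for True
               (weight for False is the complement, so total mass is 1).
    """
    variables = sorted(var_probs)
    lower = 0.0      # weight of satisfying assignments seen so far
    explored = 0.0   # total weight of all assignments seen so far
    for values in product([True, False], repeat=len(variables)):
        assign = dict(zip(variables, values))
        weight = 1.0
        for v in variables:
            weight *= var_probs[v] if assign[v] else 1.0 - var_probs[v]
        explored += weight
        # A clause is satisfied if some literal evaluates to true.
        if all(any(assign[v] if sign else not assign[v]
                   for sign, v in clause)
               for clause in clauses):
            lower += weight
        # Unexplored assignments could all still be models.
        yield lower, lower + (1.0 - explored)
```

For the formula (a ∨ b) with both weights 0.5, the bounds converge to the exact count 0.75 once all four assignments have been enumerated; a real anytime method would instead explore the search space cleverly and stop early with a nontrivial interval.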

We present an implementation of stable inductive logic programming (stable-ILP) [Sei97], a cross-disciplinary concept bridging machine learning and nonmonotonic reasoning. In a deductive capacity, stable models give meaning to logic programs containing negative assertions and cycles of dependencies. In stable-ILP, we employ these models to represent the current state specified by (possibly) negative extensional and intensional (EDB and IDB) database rules. Additionally, the computed state then serves as the domain background knowledge for a top-down ILP learner. In this paper, we discuss the architecture of the two constituent computation engines and their symbiotic interaction in the computer system INDED (pronounced "indeed"). We introduce the notion of negation as failure-to-learn and provide a real-world source of negatively recursive rules (those of the form p ← ¬p) by explicating scenarios that foster induction of such rules.
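The stable models referred to above can be computed by the standard Gelfond–Lifschitz construction: guess a candidate atom set, delete every rule whose negative body intersects the candidate, drop the remaining negative literals, and accept the candidate if it equals the least model of the resulting definite program. The following is a brute-force sketch of that check (illustrative helper names, not INDED's engines):

```python
from itertools import combinations

# A rule is (head, positive_body, negative_body), atoms as strings.

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop rules blocked by the candidate,
    strip negative literals from the rest."""
    return [(head, pos) for head, pos, neg in rules
            if not (set(neg) & candidate)]

def least_model(definite_rules):
    """Least Herbrand model of a definite program by naive forward chaining."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules, atoms):
    """Enumerate all stable models by checking every candidate atom set."""
    models = []
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            s = set(cand)
            if least_model(reduct(rules, s)) == s:
                models.append(s)
    return models
```

For the program {p ← ¬q. q ← ¬p.} this yields the two stable models {p} and {q}, while the negatively recursive rule p ← ¬p mentioned above yields no stable model at all, which is exactly why such rules are a delicate target for induction.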

People often encounter objects that are perceptually indistinguishable from objects they have seen before. When this happens, how do they decide whether the object they are looking at is something never before seen, or the same one they encountered before? To identify these objects, people surely use background knowledge and contextual cues. We propose a computational theory of identifying perceptually indistinguishable objects (PIOs), based on a set of experiments designed to identify the knowledge and perceptual cues that people use to identify PIOs. By identifying a PIO, we mean deciding which individual object is encountered, not deciding what category of objects it belongs to. In particular, identifying a PIO means deciding whether the object just encountered is a new, never before seen object or one that has been previously encountered, and, in the latter case, which previously perceived object it is. Our agent's beliefs and reasoning are based on an intensional representation (Maida & Shapiro 1982). Intensional representations model the sense (Frege 1892) of an object rather than the object referent itself.