Explanation-Based Approximate Weighted Model Counting for Probabilistic Logics

AAAI Conferences

Probabilistic inference can be realized using weighted model counting. Despite a lot of progress, computing weighted model counts exactly is still infeasible for many problems of interest, and one typically has to resort to approximation methods. We contribute a new bounded approximation method for weighted model counting based on probabilistic logic programming principles. Our bounded approximation algorithm is an anytime algorithm that provides lower and upper bounds on the weighted model count. An empirical evaluation on probabilistic logic programs shows that our approach is effective in many cases that are currently beyond the reach of exact methods. (To be published at AAAI14)
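The abstract's idea of anytime lower and upper bounds on a weighted model count can be illustrated with a brute-force sketch (this is an assumption for illustration only, not the paper's explanation-based algorithm): as assignments are examined, the weight of the satisfying ones accumulated so far is a lower bound, and adding the weight of the not-yet-examined assignments gives an upper bound.

```python
from itertools import product

def anytime_wmc(variables, weights, formula):
    """Anytime weighted model counting by naive enumeration (illustrative
    sketch only). weights maps each variable to the weight of it being
    True (1 - w for False); formula is a predicate over an assignment
    dict. Yields (lower, upper) bounds after each assignment examined;
    the bounds converge to the exact weighted model count.
    """
    lower = 0.0
    remaining = 1.0  # total weight of assignments not yet examined
    for values in product([True, False], repeat=len(variables)):
        assign = dict(zip(variables, values))
        w = 1.0
        for v in variables:
            w *= weights[v] if assign[v] else 1.0 - weights[v]
        remaining -= w
        if formula(assign):
            lower += w
        yield lower, lower + remaining

# Example: WMC of (a or b) with P(a)=0.3, P(b)=0.6 is 1 - 0.7*0.4 = 0.72;
# stopping early at any point still yields sound bounds.
bounds = list(anytime_wmc(['a', 'b'], {'a': 0.3, 'b': 0.6},
                          lambda m: m['a'] or m['b']))
```

Unlike this exhaustive sketch, the paper's method derives its bounds from explanations found by probabilistic logic programming, so it can tighten the interval without visiting every assignment.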



A Model for Graded Levels of Generalizations in Intensional Query Answering

AAAI Conferences

We describe how intensional answer descriptions can be generated when the set of extensional answers to a given natural language question is very large. We develop a variable-depth intensional calculus that allows intensional responses to be generated at the best level of abstraction.
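The variable-depth idea can be sketched as lifting a large extensional answer set up a taxonomy, one level at a time, until the description is compact enough. This is a hypothetical illustration under an assumed child-to-parent taxonomy map, not the paper's calculus.

```python
def generalize(answers, parent, max_size):
    """Replace individual answers by their taxonomy ancestors, one level
    at a time, until the description has at most max_size terms.
    parent maps a term to its immediate ancestor (roots are absent)."""
    desc = set(answers)
    while len(desc) > max_size:
        lifted = {parent.get(x, x) for x in desc}
        if lifted == desc:  # no further generalization possible
            break
        desc = lifted
    return desc

# Hypothetical taxonomy: sparrow/robin -> bird, salmon/trout -> fish,
# bird/fish -> animal.
taxonomy = {'sparrow': 'bird', 'robin': 'bird',
            'salmon': 'fish', 'trout': 'fish',
            'bird': 'animal', 'fish': 'animal'}
```

With `max_size=2` the four extensional answers `{sparrow, robin, salmon, trout}` collapse to the intensional description `{bird, fish}`; with `max_size=1` they generalize further to `{animal}`, i.e. the depth of abstraction varies with the requested answer size.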


INDED: A Symbiotic System of Induction and Deduction

AAAI Conferences

We present an implementation of stable inductive logic programming (stable-ILP) [Sei97], a cross-disciplinary concept bridging machine learning and nonmonotonic reasoning. In a deductive capacity, stable models give meaning to logic programs containing negative assertions and cycles of dependencies. In stable-ILP, we employ these models to represent the current state specified by (possibly) negative extensional and intensional (EDB and IDB) database rules. Additionally, the computed state then serves as the domain background knowledge for a top-down ILP learner. In this paper, we discuss the architecture of the two constituent computation engines and their symbiotic interaction in the computer system INDED (pronounced "indeed"). We introduce the notion of negation as failure-to-learn and provide a real-world source of negatively recursive rules (those of the form p ← ¬p) by explicating scenarios that foster induction of such rules.
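The stable-model semantics mentioned above can be made concrete with a small brute-force checker based on the Gelfond-Lifschitz reduct (a standard textbook construction, not INDED's actual deduction engine): a candidate atom set is stable exactly when it equals the least model of its reduct.

```python
from itertools import chain, combinations

def reduct(program, model):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the candidate model; strip negative literals from the rest.
    Rules are (head, positive_body, negative_body) triples of atom names."""
    return [(h, pos) for (h, pos, neg) in program if not (set(neg) & model)]

def least_model(definite):
    """Least Herbrand model of a definite (negation-free) program,
    computed by naive fixpoint iteration."""
    m = set()
    changed = True
    while changed:
        changed = False
        for h, pos in definite:
            if set(pos) <= m and h not in m:
                m.add(h)
                changed = True
    return m

def stable_models(program, atoms):
    """Enumerate all stable models by checking every candidate atom set."""
    candidates = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(c) for c in candidates
            if least_model(reduct(program, set(c))) == set(c)]
```

On this semantics the negatively recursive rule p ← ¬p (encoded as `('p', [], ['p'])`) has no stable model at all, while the pair p ← ¬q, q ← ¬p yields the two stable models {p} and {q}, illustrating the cyclic negative dependencies the abstract refers to.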


Semantic Representation

AAAI Conferences

In recent years, there has been renewed interest in the NLP community in genuine language understanding and dialogue. Thus the long-standing issue of how the semantic content of language should be represented is reentering the communal discussion. This paper provides a brief "opinionated survey" of broad-coverage semantic representation (SR). It suggests multiple desiderata for such representations, and then outlines more than a dozen approaches to SR, some long-standing and some more recent, giving for each a quick characterization, pros and cons, and some comments on implementations.